Research

We are interested in the many ways AI can be useful for designers. We specialize in robotics, reinforcement learning, natural language processing, deep learning, and generative design.

DS – DATASET | NLP – NATURAL LANGUAGE PROCESSING | DL – DEEP LEARNING | CV – COMPUTER VISION | RL – REINFORCEMENT LEARNING | GD – GENERATIVE DESIGN | R – ROBOTICS


ADARIBERT


CV-NLP Our multimodal transformer model addresses two main tasks:

  • Ground high-level attributes in images using an object-agnostic approach
  • Use within-language attention mechanisms to find relevant sequences in unstructured text
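
For illustration only, below is a minimal, hypothetical PyTorch sketch of a single-stream multimodal encoder in this spirit: image-region features and word tokens share one self-attention stack, and a small head scores each token's relevance to the visual attributes. The module names, dimensions, and heads are assumptions, not the actual ADARIBERT implementation.

```python
# Hypothetical sketch of a single-stream multimodal encoder; NOT the actual
# ADARIBERT code, only an illustration of the general idea described above.
import torch
import torch.nn as nn

class TinyMultimodalEncoder(nn.Module):
    def __init__(self, img_dim=2048, txt_vocab=30522, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)       # project image-region features
        self.txt_emb = nn.Embedding(txt_vocab, d_model)   # word-piece embeddings
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.attr_head = nn.Linear(d_model, 1)            # per-token attribute-relevance score

    def forward(self, img_regions, token_ids):
        # img_regions: (B, R, img_dim); token_ids: (B, T)
        x = torch.cat([self.img_proj(img_regions), self.txt_emb(token_ids)], dim=1)
        h = self.encoder(x)                                # joint self-attention over both modalities
        txt_h = h[:, img_regions.size(1):]                 # keep only the text positions
        return self.attr_head(txt_h).squeeze(-1)           # relevance score per token

model = TinyMultimodalEncoder()
scores = model(torch.randn(2, 36, 2048), torch.randint(0, 30522, (2, 16)))
print(scores.shape)  # (2, 16)
```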

ADARI dataset


DS We have created ADARI—Ambiguous Descriptions and Art Images—the first large-scale self-annotated dataset of contemporary workpieces, with the aim of providing a foundational resource for subjective image description.




ADARI generative model


DL A deep multimodal learning model learns a joint space of the ADARI images and their subjective descriptions.
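
As a hedged illustration of what learning such a joint space can look like, the sketch below uses a common two-encoder contrastive setup (projection heads plus a symmetric cross-entropy loss over matched pairs); the actual ADARI generative model may be organized differently.

```python
# Hedged sketch of one common way to learn a joint image-text space;
# the actual ADARI model architecture and loss may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSpace(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, joint_dim=256):
        super().__init__()
        self.img_head = nn.Linear(img_dim, joint_dim)   # image features -> joint space
        self.txt_head = nn.Linear(txt_dim, joint_dim)   # description features -> joint space

    def forward(self, img_feats, txt_feats):
        z_img = F.normalize(self.img_head(img_feats), dim=-1)
        z_txt = F.normalize(self.txt_head(txt_feats), dim=-1)
        return z_img, z_txt

def contrastive_loss(z_img, z_txt, temperature=0.07):
    # Matched image/description pairs lie on the diagonal of the similarity matrix.
    logits = z_img @ z_txt.t() / temperature
    targets = torch.arange(z_img.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```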

DeepCloud


DL DeepCloud is a data-driven modeling system that enables the user to quickly generate innovative and unexpected objects of any class in its database – such as cars, chairs, tables, and hats. It learns the latent characteristics of these classes and enables the user to manipulate them in a meaningful way. The DeepCloud toolkit is a web-based app that uses a Python server on the backend to run the ML model and a MIDI mixer to enhance the user's interaction.
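
As a rough illustration of the interaction layer only, the sketch below maps MIDI control-change messages from mixer faders to coordinates of a latent vector using the `mido` library; this is an assumed setup, not the actual DeepCloud toolkit code.

```python
# Hypothetical sketch: mixer faders drive latent-space coordinates.
# Not the actual DeepCloud toolkit; the library (`mido`) and mapping are assumptions.
import mido
import numpy as np

LATENT_DIM = 128
latent = np.zeros(LATENT_DIM, dtype=np.float32)

def fader_to_latent(control, value, scale=3.0):
    """Map a MIDI control-change value (0-127) to one latent coordinate in [-scale, scale]."""
    if control < LATENT_DIM:
        latent[control] = (value / 127.0) * 2.0 * scale - scale

with mido.open_input() as port:              # default MIDI input port
    for msg in port:
        if msg.type == "control_change":
            fader_to_latent(msg.control, msg.value)
            # the updated latent vector would then be sent to the decoder on the
            # Python backend to re-synthesize the point cloud for display
```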


All objects in the dataset are represented as sets of points in space (point clouds). DeepCloud uses an autoencoder to learn how to compress them into a low-dimensional latent feature space and how to reconstruct them back into the original point-cloud space. In this process, it becomes a generative design system in which new objects (represented by point clouds) can be synthesized through the manipulation or combination of vectors in the latent feature space.
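
The sketch below illustrates that pipeline in minimal form: a PointNet-style encoder compresses a point cloud to a latent vector, a decoder maps it back to points, and a new shape is synthesized by blending two latent vectors. The architecture and dimensions are illustrative assumptions rather than the actual DeepCloud model, and training such an autoencoder typically uses a permutation-invariant reconstruction loss such as Chamfer distance (not shown).

```python
# Minimal, assumed sketch of a point cloud autoencoder with latent-space blending;
# not the actual DeepCloud model.
import torch
import torch.nn as nn

class PointCloudAE(nn.Module):
    def __init__(self, n_points=2048, latent_dim=128):
        super().__init__()
        self.n_points = n_points
        # PointNet-like encoder: per-point MLP followed by max pooling
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 256), nn.ReLU())
        self.to_latent = nn.Linear(256, latent_dim)
        # decoder: latent vector -> flat list of xyz coordinates
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, n_points * 3))

    def encode(self, pts):                       # pts: (B, N, 3)
        return self.to_latent(self.point_mlp(pts).max(dim=1).values)

    def decode(self, z):                         # z: (B, latent_dim)
        return self.decoder(z).view(-1, self.n_points, 3)

    def forward(self, pts):
        return self.decode(self.encode(pts))

ae = PointCloudAE()
z_chair = ae.encode(torch.randn(1, 2048, 3))
z_table = ae.encode(torch.randn(1, 2048, 3))
blend = ae.decode(0.5 * z_chair + 0.5 * z_table)   # a new shape between the two inputs
```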



Design intent prediction


NLP-CV We have trained a model that predicts ambiguous design intents given an image.

Deep Rise


DL We have trained a deep network to explore new design outcomes of high-rise buildings.


We selected 31 cities around the world and focused on specific areas that have high-rise buildings: New York City, Chicago, Atlanta, Los Angeles, Miami, Philadelphia, Pittsburgh, Boston, Seattle, San Francisco, San Diego, Houston, Dallas, Baltimore, Detroit, Indianapolis, Denver, Vancouver, Toronto, London, Paris, Riyadh, Dubai, Abu Dhabi, Hong Kong, Shanghai, Taipei, Bangkok, Singapore, Honolulu, Sydney. The 3D data of the buildings were extracted from OpenStreetMap (OSM). Within this dataset, we isolated buildings taller than 75 meters, our working definition of a high-rise building. The data collection resulted in a total of 4,956 high-rise buildings formatted as 3D OBJ models. In this research, all handling of 3D data took place in the Rhinoceros 3D modeling software.
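
For illustration, a hedged sketch of the height-filtering step is shown below; it assumes the `osmnx` `features_from_place` API and that buildings carry a parsable OSM `height` tag, which in practice is often missing and has to be supplemented from other sources.

```python
# Hedged sketch: pull building footprints from OpenStreetMap and keep those
# taller than 75 m. Assumes osmnx's features_from_place and a "height" tag;
# this is an illustration, not the project's actual data-collection pipeline.
import osmnx as ox
import pandas as pd

def tall_buildings(place, min_height_m=75.0):
    gdf = ox.features_from_place(place, tags={"building": True})
    # OSM "height" values are free text such as "310" or "310 m"; coerce to numbers.
    raw = gdf.get("height", pd.Series(index=gdf.index, dtype=object))
    heights = pd.to_numeric(raw.astype(str).str.extract(r"([\d.]+)")[0], errors="coerce")
    return gdf[heights >= min_height_m]

high_rises = tall_buildings("Manhattan, New York, USA")
print(len(high_rises), "buildings taller than 75 m")
```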



ARTISTIC STYLE ROBOTIC PAINTING


R-ML We have trained a robot to paint in the style of an artist.

A reinforcement learning algorithm learns to convert a painting into brush strokes. A robot then learns to translate the virtual strokes into real strokes, in the style of an artist, via Learning from Demonstration.
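
A simplified, hypothetical sketch of the reinforcement-learning formulation is shown below: the action is a stroke parameterization, the state is the current canvas plus the target painting, and the reward is the reduction in reconstruction error produced by the stroke. The stroke renderer and the Learning from Demonstration stage are not shown, and the details differ from the project's actual setup.

```python
# Toy, assumed environment for painting-as-RL; not the project's actual algorithm.
import numpy as np

class StrokePaintingEnv:
    def __init__(self, target, render_stroke):
        self.target = target.astype(np.float32)      # target painting, HxW grayscale in [0, 1]
        self.render_stroke = render_stroke           # function: (canvas, stroke_params) -> canvas
        self.canvas = np.zeros_like(self.target)

    def reset(self):
        self.canvas = np.zeros_like(self.target)
        return np.stack([self.canvas, self.target])

    def step(self, stroke_params):
        before = np.mean((self.canvas - self.target) ** 2)
        self.canvas = self.render_stroke(self.canvas, stroke_params)
        after = np.mean((self.canvas - self.target) ** 2)
        reward = before - after                      # how much this stroke reduced the error
        return np.stack([self.canvas, self.target]), reward, False, {}
```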

Self-learning agents for architectural design (part 1)


To bring real-time interaction to the core of generative design, we propose a method to train computational agents to satisfy custom architectural requirements using multi-agent deep reinforcement learning (MADRL). Each agent is a spatial boundary that can adapt and interact in a common environment containing information about the design context (e.g., site, obstacles, buildings). This method enables the designer to train different types of agents without expert computational knowledge. In addition, designers do not have to wait for output design alternatives: they can interact directly with the agents and the environment while these alternatives are being developed. As a result, the architectural configuration becomes the outcome of a semi-cooperative game between human designers and computational agents.

Example of residential designs for different sites
Spatial agents occupying a site on top of a hill
Spatial agents responding to randomness and achieving different configurations
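
Below is a toy, hypothetical sketch of such a shared environment: each agent is an axis-aligned spatial boundary on a site grid, rewarded for approaching a target area while avoiding obstacles and the other agents. It illustrates the environment structure only; the project's actual agents, reward terms, and MADRL training loop are more elaborate.

```python
# Toy, assumed multi-agent environment for spatial-boundary agents;
# not the project's actual MADRL implementation.
import numpy as np

class SpatialAgentsEnv:
    def __init__(self, site, target_areas):
        self.site = site                    # 2D array: 1 = buildable cell, 0 = obstacle
        self.targets = target_areas         # desired cell count per agent
        self.boxes = [[0, 0, 1, 1] for _ in target_areas]   # [row, col, height, width]

    def _mask(self, box):
        m = np.zeros_like(self.site)
        r, c, h, w = box
        m[r:r + h, c:c + w] = 1
        return m

    def step(self, actions):
        # actions: one integer per agent; 0-3 grow row/col/height/width, 4-7 shrink them
        rewards = []
        for i, (box, act) in enumerate(zip(self.boxes, actions)):
            box[act % 4] = max(0, box[act % 4] + (1 if act < 4 else -1))
            mask = self._mask(box)
            others = sum(self._mask(b) for j, b in enumerate(self.boxes) if j != i)
            overlap = np.sum(mask * ((1 - self.site) + others))   # obstacle + agent overlap
            area_err = abs(mask.sum() - self.targets[i])
            rewards.append(-area_err - 10 * overlap)              # approach target area, avoid overlap
        return self.boxes, rewards
```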



SAM dataset


DS SAM—Strokes And Motions—is a bimodal dataset curated to support ongoing research on creative robotics and creative machine learning toolmaking. The dataset consists of pairs of brushstrokes, stored as pixel-based 2D arrays, and the corresponding brush motions with six degrees of freedom (DoF).

The dataset contains more than 700 examples of brushstrokes demonstrated by a user. Each brushstroke is available as a pair: 1) the sequence of brush-tip motions in space, including both location and orientation, and 2) the scanned brushstroke as a grayscale image. Use this notebook to process and review the data.

Brush motions were collected using a motion capture system and a custom-made rigid-body marker. The coordinates were post-processed so that the origin of the coordinate system is located at the center of each cell. Brush motions are saved as a NumPy array.

Brushstrokes are scanned, converted to fixed-size images, and saved as a NumPy array.
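
As a hedged example of how such a pair might be loaded, the snippet below assumes each modality is stored as a NumPy array on disk; the actual file names, array shapes, and the companion notebook in the SAM release may differ.

```python
# Assumed file layout for illustration only; consult the SAM release for the real one.
import numpy as np

motions = np.load("sam/motions.npy", allow_pickle=True)   # per stroke: (T, 6) poses (location + orientation)
strokes = np.load("sam/strokes.npy", allow_pickle=True)   # per stroke: fixed-size grayscale image

motion, image = motions[0], strokes[0]
print("motion sequence:", motion.shape, "stroke image:", image.shape)
```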

DID-PGH Dataset


DS We have trained both analytical and generative deep networks to explore the recognition and generation of urban fabrics.


DID (Diagrammatic Image Dataset, provisional patent) is an urban dataset format for training deep neural networks. DID is a synthesized dataset consisting of raster images with diagrammatic representations of two-dimensional building forms and their neighboring urban contexts. The diagrammatic image representation offers two advantages for machine learning: low noise in the image and synthesis of information. The DID-PGH dataset comprises images of Pittsburgh, PA, each of which includes the target building placed at the center of the image, its neighboring building footprints, the street network, and the parcel shapes.
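
To make the representation concrete, the sketch below renders a DID-style diagrammatic raster with the target footprint centered and its neighbors drawn around it; the polygon coordinates, image size, and gray levels are illustrative assumptions, and the real dataset also encodes street networks and parcels.

```python
# Assumed, simplified rendering of a diagrammatic raster; not the actual DID pipeline.
from PIL import Image, ImageDraw

def diagram(target_footprint, neighbor_footprints, size=256, scale=1.0):
    img = Image.new("L", (size, size), color=255)             # white background
    draw = ImageDraw.Draw(img)

    # recenter everything on the target building's centroid
    cx = sum(x for x, _ in target_footprint) / len(target_footprint)
    cy = sum(y for _, y in target_footprint) / len(target_footprint)
    def to_px(poly):
        return [((x - cx) * scale + size / 2, (y - cy) * scale + size / 2) for x, y in poly]

    for poly in neighbor_footprints:
        draw.polygon(to_px(poly), fill=160)                    # context buildings in light gray
    draw.polygon(to_px(target_footprint), fill=0)              # target building in black, centered
    return img

img = diagram([(0, 0), (30, 0), (30, 20), (0, 20)], [[(50, 10), (70, 10), (70, 30), (50, 30)]])
img.save("did_sample.png")
```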

