Future Hybrid Mixed Reality Design Studio Environments

Jiaqi Wang
Oct 7, 2020 · 24 min read


Blending the digital and physical worlds

Brief:

There is extensive research and development in wearable mixed reality devices. Unlike virtual reality, which completely immerses you in a virtual world by obscuring your entire view, mixed reality blends real and virtual to create hybrid experiences. For this project we are looking 10 years out, assuming that head-mounted displays, such as Microsoft’s Hololens, are ubiquitous. Imagine that in the future head-mounted displays might fit into thick glasses frames and be as commonplace as smartphones are today.

The School of Design at Carnegie Mellon is hiring you to develop a mixed reality hybrid (people both in person and remote) design studio environment to enhance the studio learning experience. Focus on one aspect of the studio experience. You will be designing and developing a low-fi prototype of one interaction in a hybrid studio experience for the client to review.

Target audience:

Environments students and their classmates, as well as design faculty.

Questions to consider at the outset of the project:

What information about the studio is currently available in a traditional design studio and in what media?

How might students and faculty maintain awareness of each other without increasing interruptions?

What are some of the affordances of augmented reality?

Does additional infrastructure/technology need to be added to the environment to support AR? How does our physical environment impact how we use digital devices?

What are key “stories” that need to be told about what is happening in a design studio to students and faculty?

What information is each type of user hoping to learn about the studio?

Background

Traditional Studio supports:

Visible work ​It is easy to see what other people are working on by what is on their desk, sketched on the whiteboard, or pinned up on the wall.

Inspiration and exchange of ideas​ One could see what others were working on and discuss ideas in person.

Distraction​ Unfortunately, working in the same environment where multiple conversations are ongoing can be quite distracting.

Impromptu chats ​Professors or TAs might stop by between classes to check in with students.

Office hours​ Sometimes students might stop by for office hours with a TA or professor.

Social Cues​ One can easily see when someone seems busy or available for a conversation.

Cross-studio pollination One might be inspired by conversations with friends from a different studio who stopped by for a visit.

Social Bonding People can easily connect with, inspire, and encourage each other while working in the same space. Students might take breaks together, walk to Entropy, and discuss their work over food and drink.

Social Support​ A word of encouragement might help someone else to have a breakthrough.

Online learning:

● Some conversations occur over Zoom and other online tools, while other conversations occur at a social distance wearing face-masks.

● We’ve used a Figma board for critiques to see what the whole class is working on at a glance.

● We’ve experimented with Gather.town to informally give people who are working on studio assignments opportunities to connect with studiomates.

● Missing is a single central location where students can work together outside of class hours.

● It’s difficult to be aware of what people are working on. Zoom makes it easy to see people’s faces, but not their desks, so it is hard to quickly get a sense of what they are currently working on.

● It’s difficult to know when people are available for a break or conversation.

● It’s difficult to have serendipitous unscheduled conversations with people outside of class time or meet in a common place between classes as one might in a traditional studio.

Kicking off the project, I reviewed videos and websites about MR, VR, and AR to get myself more familiar with the technologies and the possible ways they can improve the studio experience. Here are some notes I took:

Hololens 2:

Category: MR

  • Instinctual interaction — a more human way.
  • Comfortable to wear.
  • Automatically signed in — less barrier.
  • Tracking, calibrating to customized/unique size of the user.
  • The interface should be inviting (graphic visual feedback).
  • The “buttons” would be treated like real things. Think about how to mimic the physical sensation/spatial reference/motion of the interface. The best way is to prototype as many versions as possible and try out which feels the best.
  • Avatars of online participants.

Spatial lab:

  • People could pin up ideas on virtual walls to exchange ideas and create a shared understanding.
  • Connecting existing devices and internet search with MR (e.g., using a phone to generate a virtual sticky note and attaching it to a 3D model).
  • Avatars of remote participants.

Tilt Five:

Category: AR

  • What is the benefit of having AR somewhere vs. AR everywhere?
  • Field of view + sense of depth.
  • Super lightweight, few thermal problems, high resolution, no dropped frames.
  • Share with friends in the same room & online!

VR Interface Design

Category: VR

Focus: user interfaces for ergonomic multitasking in virtual reality.

Benefits of using an HMD (head-mounted display) for work:

  • More screen space.
  • Less interruption.
  • Z-depth helps people organize things spatially.
  • Happier workers.

Ergonomic interface:

  • be aware of people’s field of view and touch zone. Try to match the content display zone with the user’s comfort zone.
  • Hierarchical division of space in VR: The main content, peripheral content, and curiosity zone.

After reviewing how each of these technologies works and what it can achieve, the next important thing was to learn the design approach and process for creating mixed reality.

Design Approach for Mixed Reality

Generating ideas can be challenging when tackling the technical uncertainty inherent in new tools and technologies. The article Expanding the Design Process for Mixed Reality talks about techniques that support the workflow for people who are unfamiliar with 3D tools.

  • Generating ideas with bodystorming: a playing field for participants from different backgrounds to uncover opportunities in 3D space.
  • Acting and expert feedback: experience the thinking through the perspective of the user & outside observers to see how events play out.
  • Capturing ideas with storyboards: Low-fidelity storyboards for quick discussions; high-fidelity storyboards for capturing difficult-to-describe ideas and artistic direction.

Initial Research

I started off by interviewing some of my classmates. I did not have a clear idea about which direction I wanted to focus on, so the conversations I had were fluid and flexible. Through talking to people about their frustrations and special circumstances, I discovered many aspects of the studio environment and the online learning experience.

Click here to view interview documentation in detail.

some screenshots of interviewing other design students via messenger

Summary of the interviews:

I interviewed 5 people. Max, Se A, and Grace are studying remotely in three different time zones. Daniel and Rachel are located in Pittsburgh, taking hybrid classes; Daniel lives on campus whereas Rachel lives off-campus. I think their situations cover essentially all the possible settings.

time zone map of five interviewees + myself

In general, the topics we went through include:

  • Living situation: living with family/ living with roommate(s)/ living by yourself.
  • Sleeping schedule: morning person/ night owl/ varied/ unconventional.
  • Balancing other things in life vs. studio work.
  • How much a healthy lifestyle is valued: exercise/ mindfulness/ stress management.
  • Need for social interaction.
  • Technology intensity of work.
  • Pain points.

My situation:

I am located in Beijing, China. I had been studying from home for the first two weeks. Starting from the third week, I gradually moved my workspace from my bedroom to a dorm room at Tsinghua University. Luckily, CMU reached out to a university in Beijing, and I got the chance to join a visiting student program.

The bright side is that I can use the university’s facilities, get in touch with some freshman CMU students, and drop in on any lecture (I also need to take one course there).

However, I am not able to access a studio, and my dorm room is not big enough for making models, so I need to travel back and forth between home and the dorm to get different tasks done. Besides, since I have a roommate, I need to accommodate her schedule, so my plans and sleep schedule are sometimes interrupted. In general, the 12-hour time difference potentially offers me more opportunities to get ready before class, but it also gives me the illusion that I have plenty of time for work. I realized that daily activities like eating and moving from one place to another take me longer than last semester. Also, since I’m not working with others in the same studio anymore, it’s easy to forget or underestimate how much effort I need to put into studio work.

User Personas:

Based on my observations and my own situation, I found that some qualities, like the need for social interaction, can be seen on a spectrum, whereas other aspects, like living situation, are more defined. I took this into consideration when creating the three personas. In addition, special backgrounds, like being a transfer student or someone who took a gap year, should be taken into consideration, since these students might have different pain points and advantages compared to other sophomores.

three different personas

Brainstorming ideas:

My concentration would be on how to boost the learning experience of remote studying. Some of the directions I am interested in include: Visible work, Social Bonding, and Social Cues.

Storyboard explorations:

Storyboard 1

Category: MR, Hololens. #visible work

My first storyboard is focused on visible work. In this scenario, when the girl gets stuck, she can easily access and zoom in to look at what other students are working on or have worked on up to that moment: a real-time update of progress. I assume that everyone would be using the same kind of drawing tablet that only supports drawing, without additional functions. This tablet mimics a sketchbook in real life, which serves the single purpose of drawing. Sometimes, having as many functions as possible is not necessarily a good thing, given the distractions it could cause.

storyboard 1

Storyboard 2

Category: AR, Tilt Five. #Social bonding

This storyboard is about reducing digital friction and the entry barrier to social bonding. The core concept is a realistic “window” on the studio wall, and next to each remote person’s workspace, that serves as the threshold between the two spaces. It is different from Zoom in many ways:

  1. It requires minimal effort to reach out to others: just put the glasses on and they are right across the “window”.
  2. High resolution and a realistic 3D visual effect. The perspective changes according to where the person stands.
  3. Multiplayer. Tilt Five supports multiple users seeing and interacting with the same subject. Whereas in Zoom everyone is separated into different “boxes”, and people who are in the same room cannot fit into one screen, this design brings everyone together in a more natural way.
storyboard 2

When I was watching the video in which the developer of Tilt Five explained the technology more in-depth, I was really inspired by the question of what the difference is between having AR somewhere vs. AR everywhere. I think one of the benefits of having AR bounded to a certain area is that the information presented is more manageable. Psychologically speaking, it’s easier for people to remember where to find certain information if it has a physical representation that we are familiar with in real life, like the folder icon on our desktop.

Oct 8th

To-Dos:

Document the deskspace where you work.

Take screenshots of your desktop (and other digital devices — phone/tablet) when you are doing design work.

What programs do you use to work, communicate, socialize? Make sketches of how you get tasks done.

Desk Space Documentation:

My physical desktop workspace

As you can see, my working desk in the dorm is very small and crowded. I usually start by writing down a to-do list and the deadlines for each task. Oftentimes I underestimate the amount of time tasks require, but this is a way to keep myself aware of time.

digital workspace

In my digital workspace, I have a lot of tabs and two different browsers open. I also constantly switch from one desktop to another, but I’m often confused about how they are organized and what I should see each time I swipe left or right.

programs/ apps I use to work/communicate/socialize

I made a coordinate axis to analyze the programs/apps that I often use. The y-axis is a spectrum from work to life, and the x-axis is a spectrum from digital to physical. The apps located right on an axis are the transition between the two ends. For example, CamScanner helps me scan my physical drawings and convert them into digital images.

sketch of my working habits

Despite how small my space is, I basically divided it into two areas:

  1. Chill space: I have a smaller table on my bed, where I usually do tasks that require less focus/tension.
  2. Concentration space: This is a more formal workspace. It’s also more comfortable to draw when sitting on an ergonomic chair.

Self-reflection:

As the prevalence of digital media in our physical environments increases daily, what is the role and/or responsibility of designers in shaping our environments?

Unlike the physical world, the environment that people build in digital media is entirely artificial. The system, the visual effects, and the interactions are all human-made. This means the designers behind the scenes play the role of “god”, deciding the way everything works in the digital world. Having a world designed by humans is a double-edged sword. As much as this could benefit users by making the interface more understandable and accessible to people with different disabilities, it could also subconsciously introduce problematic mindsets and behaviors. On the bright side, designers are expected to make strong observations about how people work, how the existing world works, and the intricate relationships between them, and, based on these observations, to find the best way to blend or connect the natural world and the digital world. For example, by understanding the psychological features of human cognition, designers can find the most efficient way to present information so that users can complete tasks with a minimal amount of cognitive load.

On the other hand, since the market has an increasing demand for digital media, developers can become market-driven, leading to cross-company competition for users’ attention. Their products will be designed to make users more and more obsessed and thus detached from their lives.

Oct 8th

Takeaways from class:

  • The business model: a “free” service is not free. The system could be collecting data about the user to help the algorithm decide which advertisements to show.
  • The role of the designer: employee? customer? advertiser? What is the goal?
  • The inevitability of digital media in the physical world. The question of could vs. should. What are the impacts of a design decision, and are they beneficial?
  • The ethical and moral role of designers: where to draw the lines?
  • How the environment supports interaction: little details could determine how stressed people are.
  • Universal standards of digital language/elements to support understanding.
  • Private vs. public.
  • How to leverage mixed reality based on the existing world.
  • What are the thresholds?

During lab time, we were introduced to sketch videos and got set up on Discord.

Oct 10

Over the weekend, I had more time to explore potential interactions, so I did more in-depth research into what affordances current technologies can provide and how different development teams have been tackling this complex issue.

More research:

1. Google AR & VR

Using Cloud Anchors, the app lets users add virtual objects to an AR scene. Multiple users can then view and interact with these objects simultaneously from different positions in a shared physical space.

How it works:

  1. The user creates a local anchor in the environment.
  2. During hosting, ARCore uploads data for the anchor to the ARCore Cloud Anchor service, which returns a unique ID for that anchor.
  3. The app distributes the unique ID to other users.
  4. During resolving, users with the unique ID can recreate the same anchor using the ARCore Cloud Anchor service.
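To make this flow concrete, here is a minimal Kotlin sketch of the host/resolve steps, assuming you already have a configured ARCore Session and a locally created Anchor; the Session and Anchor calls come from the ARCore Android SDK, while the send callback for distributing the ID is a placeholder I made up.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Anchor.CloudAnchorState
import com.google.ar.core.Session

// Steps 1-2: upload a locally created anchor; ARCore returns a new Anchor
// whose cloud state and ID become available once hosting finishes.
fun hostAnchor(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// Step 3: poll the hosted anchor (e.g., once per frame) and, once hosting
// succeeds, share its ID over any channel (a room code, chat message, database row...).
fun shareWhenReady(hostedAnchor: Anchor, send: (String) -> Unit) {
    if (hostedAnchor.cloudAnchorState == CloudAnchorState.SUCCESS) {
        send(hostedAnchor.cloudAnchorId)
    }
}

// Step 4: other users resolve the same ID, so content attached to the returned
// anchor appears at the same spot in the shared physical space.
fun resolveAnchor(session: Session, cloudAnchorId: String): Anchor =
    session.resolveCloudAnchor(cloudAnchorId)
```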

Similarly, Microsoft supports multi-user interactions using Azure Spatial Anchors. Both platforms support devices running different operating systems (iOS, Android, etc.).

Shared experiences in Unity by Microsoft

2. VROOM

VROOM, which stands for Virtual Robot Overlay for Online Meetings, is a way to synthesize AR, VR, and telepresence robots in an amalgam of digital and physical technologies designed to create more immersive presences for distributed workers in the office.

How it works:

The remote worker’s VR headset tracks their position and head movement, while a telepresence robot performs a stereoscopic scan of the surrounding office. A Unity app animates an avatar of the remote worker on top of the robot, so that people in the office wearing a Mixed Reality headset can see the avatar in real time through their headsets.

Hand gestures and arm movements are recorded by controllers and seen by both participants. The system adds mouth movement when people are talking, as well as blinking and idle movements to make the avatar seem more lifelike.

3. Codec avatars

Facebook’s development team announced that they can build codec avatars that are indistinguishable from the humans they represent, and that these avatars may be a staple of our virtual lives sooner than we think.

virtual conversation using codec avatars

How it works:

The point is to capture as much information as possible (Mugsy and the Sociopticon gather 180 gigabytes every second) so that a neural network can learn to map expressions and movements to sounds and muscle deformations, from every possible angle.

4. Mixed Reality Capture Studios

Microsoft Mixed Reality Capture Studios enable content creators to create 360-degree holograms from real-life subjects that can be used in applications across augmented reality, virtual reality, and 2D screens.

5. Shared Experiences in Mixed Reality

6 questions to bear in mind:

1. How are they sharing? Presentation? Collaboration? Guidance?

2. What is the group size? 1:1? Small (<7)? Large (>7)?

What it could influence:

  • Representations of people in holographic space
  • Scale of objects
  • Scale of environment

3. Where is everyone? In the same room? Remote? Both?

What it could influence:

  • How people communicate (whether they should have avatars).
  • What objects they see; are all objects shared?
  • Whether we need to adapt to their environment.

4. When are they sharing? Synchronously? Asynchronously? Both?

What it could influence:

  • Object and environment persistence.
  • User perspective.

5. How similar are their physical environments?

What it could influence:

  • How people will experience these objects. For example, what if your experience works best on a table but the user has no table, or requires a flat floor surface but the user’s space is cluttered?
  • Scale of the objects. For example, placing a 6-foot human model on a table could be challenging, but a heart model would work great.

6. What devices are they using? 2D? 3D? Mixed?

Storyboard 3

Category: Hololens. #inspiration and exchange of ideas #collaboration

Recalling the experience of collaborating with other students over Zoom last semester, I found some problems that could be improved by a mixed reality device. When working in Adobe software separately, it was hard to put everyone’s work together to see how it looked. Each person had to upload their part, and usually someone had to do extra work stitching the pieces together. If something went wrong, the whole process had to be repeated. It was very time-consuming and took away the benefits of teamwork.

This could be even worse when working on a 3D project like the hybrid exhibition, for example. There are so many iterations of SketchUp models and physical models. It would be difficult for multiple people to observe the 3D rendering through a laptop screen without competing for the mouse, not to mention adding their modifications/sections to the model.

storyboard3

In this scenario, an augmented 3D model of the exhibition would be placed on the desktop, and all changes would be updated in real time via the cloud. Using something similar to Cloud Anchors, both co-located and remote people could make changes to the model simultaneously, so each member of the team does not need to make an individual model when the team only needs one for the final deliverable. The scaling function allows students to get a better grasp of the space and see things in detail.

Storyboard 4

Category: Hololens.

storyboard 4

Video Sketch 1

This assignment brought me straight back to the time-based instruction project; a lot of considerations like lighting, composition, transition, point of view, and pacing came back to my mind. Also, picking up my After Effects skills reminded me of the struggle I had in the animal project.😅

I wanted to further develop my ideas from storyboard 2, in which students in the studio and outside of the studio can reach out to each other easily and naturally, as if remote students are just a window away.

video sketch 1

In the process of making video sketches, I needed to nail down more details and specific procedures. Some usability issues that the storyboard did not reveal were exposed here:

  1. Privacy concerns. Is it comfortable to be observed all the time?

Solution: when the remote person is not aware that someone is standing in front of him/her, the “window” will be foggy. If students in the studio want to chat with this person, they just knock on the “window”, and the foggy filter is removed as soon as the remote person puts on the glasses.
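To pin down this knock-to-unfog behavior, here is a tiny state-machine sketch of the interaction; the states, events, and transitions are entirely hypothetical, just my own naming for the logic described above rather than any real SDK.

```kotlin
// Hypothetical states of the shared "window" (not from any real SDK).
enum class WindowState { FOGGY, KNOCKED, CLEAR }

// Hypothetical events that could drive the window.
sealed class WindowEvent {
    object Knock : WindowEvent()            // a studio student knocks on the window
    object RemoteGlassesOn : WindowEvent()  // the remote student puts their glasses on
    object RemoteGlassesOff : WindowEvent() // the remote student takes them off
}

fun nextState(state: WindowState, event: WindowEvent): WindowState = when (event) {
    // A knock only signals interest; the window stays foggy until the remote side responds.
    WindowEvent.Knock -> if (state == WindowState.FOGGY) WindowState.KNOCKED else state
    // The fog filter is removed only after the remote person opts in by wearing the glasses.
    WindowEvent.RemoteGlassesOn -> if (state == WindowState.KNOCKED) WindowState.CLEAR else state
    // Taking the glasses off always restores privacy.
    WindowEvent.RemoteGlassesOff -> WindowState.FOGGY
}
```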

2. How is it different from a giant screen?

Having a z-depth property gives the viewer a sense of space. If there are multiple participants, each of them can see the subject from their own perspective/angle without distortion.

Additionally, the board is a lot more affordable and adjustable than a screen.

Multiple Uses of Tilt Five Game Board

“With the XE Game Board you get even more space to play as well as multiple configurations. Prop one end up with the kickstand and you suddenly get infinite depth and additional height. Configure it as a square board, or add the additional section and convert it into a 42 inch long board.”

3. How similar are their physical environments?

The lighting might be too different between the two spaces, which could make the window seem unrealistic.

The moving paintings from Harry Potter

P.S.: I later realized that my vision is very similar to the moving paintings in the wizarding world.😳

Sketch video 2

After struggling so much with the Roto Brush in After Effects to create the layers, I decided to go with a first-person point of view. Also, I was kind of embarrassed to ask random strangers to film me making strange gestures in the air.

Feedback & Reflection:

For the first sketch video, I got a lot of questions regarding the board: Why do I need a board, and what can it do? How is it different from a giant screen? How does the window system work?

I reviewed the tech-breakdown video for Tilt Five to double-check if the board is playing an important role in this interaction.

Tilt Five uses a unique system compared to other existing ones. The glasses project out onto a special game board called a retroreflector, which allows the headset to be high performance, lightweight, and have a super-wide field of view. The projector embedded in the glasses sends light to the retroreflector, which bounces it back to each user so that the pixels stay in focus. This means the board itself does not give off light; the projectors in the glasses send the visuals back into the viewer’s eyes. Also, once you put on the glasses, the system starts automatically: no calibration, complex setup, or sensors in the room.

The board is just a reflector; nothing actually appears on it. What the viewers are seeing is the projection from the glasses.

Things I can push forward:

  • Explain the board’s functions.
  • Show the difference by using comparison.
  • Think about what people could do with this window system.
  • Make sketch videos that cover the other aspects (switching windows for remote-to-remote interaction in addition to remote-to-studio interaction)
  • Emphasize the advantages of using the board (affordable/ portable/ precise/ effortless).

Oct 15

Sketch video 1 Iteration 2

Iteration 2

In this sketch video, I try to show the interaction in detail by making the procedure more explicit. In addition, I think I should take advantage of different technologies. Before, I was mostly referencing AR using Tilt Five (content bounded within the board, not floating in the air). Although that can create the illusion that remote people are just a window away because of the sense of depth it gives, it also limits what people can do with the window. I decided to assume that the head-mounted display of the future could incorporate functions from multiple systems and still be lightweight: specifically, Tilt Five glasses + Hololens. This allows users to make changes to each other’s space and have things “floating” in the air instead of bound to the window frame. For example, they could share their work by throwing it through the window, and the other side would receive it in his/her space.

One thing I need to decide ASAP is which direction I want to dive deeper into. For now, this interaction could potentially support inspiration and exchange of ideas and social bonding, and provide social cues.

Takeaways from today’s class:

  • what feels more genuine.
  • a mixture of functions.
  • layering the reality, move beyond the screen.
  • how to leverage 3D space to solve pain points in 2D.
  • reduce friction, bridge the gap
  • what are the input/output devices

Prototyping with videos:

  • Some hacks or things to try: use a green screen; maybe even paint the tools green; make physical models and do stop-motion animation.
  • Start shooting over the weekend; storyboard first.

Oct 19

For the next sketch video, I decided to focus on social bonding and social cues instead of sharing work, so I started to think about games and fun experiences that MR & AR could afford.

Existing AR Games/Experience:

Board Games (for a group of people):

Since Tilt Five is specially designed for all kinds of board games, I could take advantage of that. By just adding another game board placed on the table (not on the walls like the windows for communication), remote people and those who are in the studio can easily take a break from work together.

Overall, board games can be divided into many categories, but not all of them are suitable for us. I picked the 3 board games that are most popular among my peers, and I might prototype how one of them would feel in my sketch video.

Avalon, The Werewolves of Miller’s Hollow, Pandemic

pictures of the three board games.

I decided to use The Werewolves of Miller’s Hollow as an example in the sketch video. Usually, this game requires at least 5–6 players. Although there are up to 10–20 characters depending on which version you go with, 5 character types are usually enough to keep the game going when there aren’t many players: The Captain (1), The Simple Villager (1–2), The Werewolf (at least 2), The Fortune Teller (1), and The Witch (1). In addition, there is usually a participant taking the role of a “god” to distribute the identity cards and keep order, but this person could now be replaced by an AI instructor. In fact, I believe the online version of this game already has an automatic “god”.
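As a rough illustration of what the AI “god” would need to do, here is a small hypothetical sketch that shuffles and deals the role types listed above to however many players join; the function name and role counts simply follow this paragraph, not any real implementation.

```kotlin
// Deal identity cards for a small game of The Werewolves of Miller's Hollow.
// Role counts follow the paragraph above: 1 Captain, 1 Fortune Teller, 1 Witch,
// at least 2 Werewolves, and Simple Villagers filling any remaining seats.
fun dealRoles(players: List<String>): Map<String, String> {
    val roles = mutableListOf("Captain", "Fortune Teller", "Witch", "Werewolf", "Werewolf")
    while (roles.size < players.size) roles.add("Simple Villager")
    roles.shuffle()                    // the AI "god" keeps the assignment secret
    return players.zip(roles).toMap()  // each player name -> their hidden role
}

// Example: dealRoles(listOf("A", "B", "C", "D", "E", "F")) might return
// {A=Werewolf, B=Witch, C=Captain, D=Simple Villager, E=Werewolf, F=Fortune Teller}
```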

I was also thinking about how MR could enhance the entertainment experience in ways that other formats cannot. In other words, what are the benefits of playing board games in MR, and how can I emphasize them?

Traditionally, players receive instructions and messages from the “god” to know their situation: the time of day in the game (sunrise or nightfall), who was killed last night, and the identity of the person who was voted out. Conveying this information in words can be stiff and make it hard for players to step into their roles immediately. With MR, the surroundings and even the players’ appearances could be modified to create a more immersive experience.

Different filters on Snapchat that could be used in this game

Others:

Sometimes, I feel like MR could be the closest thing to magic. Thinking back to how the windows feel like the moving paintings at Hogwarts, I’d like to explore further how to create a magical experience with MR.

The first thing that came to mind was the Patronus spell. The Patronus is pleasing to look at, and one would feel a personal connection to his/her own Patronus because it is uniquely tied to that person’s personality. I’m imagining the interaction to be subtle and relaxing. For example, remote people could send their Patronus to hop around the studio, and vice versa. This could be a simple greeting, or a signal saying, “Hey, I’m free for a chat.” Others could respond by sending their Patronus back.

Casting a Patronus Spell

For a more intense and engaging experience, I chose martial magic.

magic combat with other players

Planning the shots/Storyboard:

Part 1:

a quick sketch of storyboard part I
a quick sketch of storyboard part II

Self-reflection

How were the skills you developed in the first project similar and/or different from the second project?

I think there are a lot of similarities between this project and the previous one. Some of the concepts we needed to consider are recurring, but the focus might be different. For example, we learned to identify and be aware of what constitutes a threshold. In the hybrid exhibit project, we wanted to make the threshold more perceivable for viewers, to signal the message that “you have now entered a different space.” This helps viewers understand which section they are in and guides them to move through the space in a cohesive manner. By contrast, in the future hybrid mixed reality project, we aimed to lower the threshold between the virtual world and the physical world to create a more seamless studio experience.

We have also taken accessibility into consideration in both projects, but one required more in-depth research than the other. I think it’s because the range of target users/audiences is different. Whereas the exhibit was designed for a more general public in Oakland, the hybrid studio was specifically designed for design students and faculty. Thus, for the first project, we thought of our audience as a crowd rather than as individuals; the main focus was on how users would interact with the content, like what type size and screen height are comfortable to look at. For the second project, we were more invested in researching individuals’ needs, pain points, and so on, because the ultimate goal of our design is to benefit them: not some of them, but all of them. So we also developed personas and a persona spectrum to improve diversity and inclusivity.

The biggest difference between the projects is the time frame. The future hybrid studio project, as the name suggests, is more future-oriented, whereas the hybrid exhibit project was a temporary design that encourages people to take action immediately. This difference in time horizon changes our focus of work. The first project was more based on existing technologies and tools, which allowed us to make detailed and applicable prototypes like the Tinkercad tools. Also, we all more or less have the experience of visiting exhibitions, which gives us some common sense and standards for the deliverables. However, in this project, we are dealing with a relatively new platform. As Davis mentioned in class, we are imagining the infrastructure of the MR studio experience instead of its interior or decorations, so it requires a different mode of thinking.

What is your understanding of the role of an Environments designer?

An excitement, and perhaps also a discomfort, that Environments designers need to deal with is constantly tackling things that we don’t fully understand. So far, we have touched on the topics of climate change and the studio experience, which are familiar enough for us to design with confidence, but what if a topic gets really specialized and unrelatable? There is no way to become an expert in something overnight if you are not Tony Stark. From time to time I feel that I am caught in the middle between technology and design. Every problem is unique in a way that requires different toolkits and knowledge. That’s why it’s so important for Environments designers to actively cooperate with people from different fields of expertise, and to learn as we go.

Environments designers also carry the responsibility of weighing moral and ethical considerations. Where do we draw the lines? Even if we could do something, should we? Is our design inclusive, and does it encourage good behaviors? We should always make a point of asking ourselves these questions.

In a word, our role is not to predict the future, but to design it.
