Playful Programmable Projection: Large Scale

When we saw Team Lab’s installation “Sketch Aquarium,” we were struck by its aesthetics and immersive digital environment. It inspired us to create a large immersive environment with Scratch and projection, where people could interact with digital sprites and use coding to craft an interactive story or narrative.

In May, we had an opportunity to exhibit at the Maker Faire in San Mateo. We took the opportunity to try out our half-baked idea: creating a programmable, playful projection environment. Below is a sketch of our workflow.

Draw it - Scan it - Code it - Project it

We set up a large screen and projected an underwater scene. Visitors were invited first to draw an underwater creature on a piece of paper. Next we would scan the drawing to upload it as a Scratch sprite. From there, visitors would work on coding their sprites in Scratch to animate their handmade creatures. When they finished coding, we added their animations to the communal underwater scene. (The technical details are in Steph’s post.)

We really liked that the playful projected environment created an intimate theatrical experience in the dark hall. When people entered the space, they saw the large projected screen full of hand-drawn sea creatures in motion. People enjoyed interacting with the scene by casting their shadow on the screen and interrupting the projection with other white objects, such as a Tyvek suit, a white hat, and a white umbrella that we brought with us. It was nice to see that the space was engaging people in multiple ways.

However, we also observed room for improvement in the activity workflow and setup. First of all, there were just too many little steps in the Draw - Code - Project cycle:

1) Draw a sprite

  • Scan it (with the iPad app Scannable)
  • Save it on the iPad
  • Erase the sprite’s background (with the iPad app Background Eraser)
  • Save it on the iPad
  • AirDrop it to an individual laptop

2) Code it 

  • Upload your sprite to Scratch
  • When you finish coding, use Scratch’s Backpack feature to send your sprite to the master computer
  • Retrieve it on the master computer

3) Project it

  • Tweak the code to adjust the motion, size, scale, and speed

Does having multiple steps interfere with tinkerability? I would say, not really. Switching between multiple apps to get a hand-drawn sprite ready to use in Scratch didn’t take long, and most people enjoyed the steps because they could see how their drawing was scanned and turned into a digital sprite. (Of course, in an ideal world it would be great to do it all in Scratch, but that was not a major issue here.)

More importantly, we noticed that participants saw their sprite among the other creatures for the first time only after it was projected, and they often didn’t have a sense of its size, scale, or speed. So it was natural that they wanted to go back and forth between coding and projecting to make changes once the sprite was launched, but they didn’t. Why? Because, as shown in the illustration here, there was only one projector, and they did not want to bother the facilitators by asking to add a sprite again.

With this setup, it was hard to let participants edit their sprites in real time in response to what was happening on the communal screen, because we were using one master computer to project everyone’s sprites. At each individual laptop, participants were supposed to “finish” coding and send their sprite to the master computer via Scratch’s Backpack tool, so that the facilitator at the master computer could receive it and upload it manually. This process seemed to discourage some people from going back and forth between coding and projecting.

One of my favorite moments from the event was an interaction I saw between a boy and his family (see the photo below). They first made a red fish (you can see it projected on his right shoulder) and projected it onto the screen. Then they made a red crab to be friends with the fish. When the parents took photos, they asked me how to change the code on the master computer so that the two sprites would swim only at the bottom of the sea. We worked together at the master computer to tweak the code and adjust the size and motion speed of the crab and fish. They wanted to make these changes only after seeing the sprites projected into the underwater environment. While these interactions were the most rewarding part of the event, this type of facilitation caused a bottleneck when serving many people in the space.
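The family’s tweak was done with Scratch blocks on the master computer, but the idea translates to any animation loop: clamp the sprite’s vertical position to a band near the bottom of the stage. Here is a minimal illustrative sketch in Python, not the actual Scratch code; the coordinate values and function names are hypothetical, loosely based on Scratch’s stage range.

```python
# Illustrative sketch only: confining a sprite's motion to the bottom
# of the scene, as the family did for their fish and crab in Scratch.
# The band limits below are hypothetical, Scratch-like stage coordinates.

SCENE_BOTTOM = -180   # lowest y on the stage
SEAFLOOR_TOP = -100   # top of the "bottom of the sea" band

def step_sprite(x, y, dx, dy):
    """Advance a sprite one frame, keeping y inside the seafloor band."""
    x += dx
    y += dy
    if y > SEAFLOOR_TOP:       # drifted too high: pin to the band, head back down
        y = SEAFLOOR_TOP
        dy = -abs(dy)
    if y < SCENE_BOTTOM:       # hit the floor: bounce upward
        y = SCENE_BOTTOM
        dy = abs(dy)
    return x, y, dx, dy

# Example: a crab drifting upward out of the band gets held at its top edge
# and its vertical direction reversed.
x, y, dx, dy = step_sprite(0, -95, 2, 3)   # -> (2, -100, 2, -3)
```

Adjusting size and speed amounts to scaling the sprite’s costume and the `dx`/`dy` step values per frame, which is exactly the kind of quick, visible tweak participants wanted to make once they saw their creature in context.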

Since then, our team has been testing out various setups, such as multiple smaller tabletop screens with mini projectors, to see whether that gives people more opportunities to tinker with their code once they see their sprite on the screen. Having multiple small stations certainly allowed people to tweak their code in real time at each station, but the environment looked less inviting and attractive than a single large collaborative projection. We are still very interested in developing an immersive, hands-on activity and environment where people can collaboratively create stories and scenes through coding.


This work was supported by a grant from Science Sandbox, an initiative of the Simons Foundation.