Lighting experiment: tinkering with programmatically generating shadows and highlights to match the direction and strength of your real-world lighting environment (using a webcam).

Put another way, user interface elements that emulate real-world materials (extruded buttons, glossy surfaces, etc.) could react just like their real-world equivalents based on the lighting around you.

I first need to play with things a little more to improve latency, but I’m really interested in incorporating reflections in [simulated] semi-transparent / glossy materials.
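For the curious, here's a minimal sketch of one way the core loop could work, not the actual prototype code: sample the webcam at low resolution, treat the luminance-weighted centroid of each frame as a crude estimate of where the light is coming from, and cast a CSS box-shadow in the opposite direction. The element IDs, tuning constants, and the centroid heuristic itself are all illustrative assumptions.

```ts
// Rough sketch: estimate light direction from the webcam and mirror it
// as a CSS box-shadow. IDs and constants are illustrative, not the prototype's.
async function startLightTracking(targetId: string): Promise<void> {
  const target = document.getElementById(targetId)!;
  const video = document.createElement("video");
  video.muted = true;
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = 64;   // low resolution is plenty for a rough direction estimate
  canvas.height = 48;
  const ctx = canvas.getContext("2d")!;

  function tick() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);

    // Luminance-weighted centroid of the frame ~ where the light comes from.
    let sumX = 0, sumY = 0, sumL = 0;
    for (let y = 0; y < canvas.height; y++) {
      for (let x = 0; x < canvas.width; x++) {
        const i = (y * canvas.width + x) * 4;
        const lum = 0.2126 * data[i] + 0.7152 * data[i + 1] + 0.0722 * data[i + 2];
        sumX += x * lum; sumY += y * lum; sumL += lum;
      }
    }
    const cx = sumX / sumL / canvas.width - 0.5;   // -0.5..0.5, 0 = centered
    const cy = sumY / sumL / canvas.height - 0.5;
    const strength = sumL / (canvas.width * canvas.height * 255); // 0..1

    // Shadow falls opposite the light; strength scales softness and opacity.
    const dx = (-cx * 30).toFixed(1);
    const dy = (-cy * 30).toFixed(1);
    target.style.boxShadow =
      `${dx}px ${dy}px ${10 + strength * 20}px rgba(0,0,0,${0.2 + strength * 0.4})`;

    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```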

One of my favorite features of recent smartphones/wearables is the ability to use voice commands by simply addressing the device. By forcing commands to begin with a phrase like “Ok Google” or “Ok Glass”, we’ve made it much easier for these devices to distinguish between ordinary conversations and commands.

I was thinking about how this might be extended to larger objects in shared/public spaces (traditional desktop computers, kiosks, wall displays, etc.), where (1) we probably can't train on a particular individual's voice, and (2) requiring a non-obvious trigger phrase is undesirable.

To test things out, I threw together this prototype that listens for commands if and only if the system has a person's attention, as determined by head tracking and proximity. This way I can just look at my computer to give it commands, but ordinary conversation with my roommate is correctly ignored.
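Here's a minimal sketch of the gating idea, not the actual prototype: the Web Speech API listens continuously, but results are only acted on while an attention flag is set. The `onAttentionUpdate` hook and the thresholds are hypothetical stand-ins for whatever head tracker and proximity estimate you have available.

```ts
// Sketch of the gating logic: speech results are only acted on while an
// attention flag is set. `onAttentionUpdate` is a hypothetical hook fed by
// your head tracker / proximity sensor; thresholds are illustrative.
interface Attention {
  facingScreen: boolean;   // head roughly oriented toward the display
  distanceMeters: number;  // rough proximity estimate
}

let hasAttention = false;

// Called by the head/face tracker on every frame (hypothetical hook).
function onAttentionUpdate(a: Attention): void {
  hasAttention = a.facingScreen && a.distanceMeters < 1.5;
}

function startListening(onCommand: (text: string) => void): void {
  // Web Speech API (Chrome exposes it as webkitSpeechRecognition).
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  const recognizer = new Recognition();
  recognizer.continuous = true;
  recognizer.interimResults = false;

  recognizer.onresult = (event: any) => {
    const text = event.results[event.results.length - 1][0].transcript.trim();
    // The key bit: ignore speech unless the system has the person's attention.
    if (hasAttention) onCommand(text);
  };

  recognizer.onend = () => recognizer.start(); // keep listening indefinitely
  recognizer.start();
}
```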

Out of curiosity, I built a little prototype of a draggable Framer View that, given a set of states (position, scale, rotation, etc.), “snaps” to the closest state and tweens to it with a spring.

A nice example of where you might use something like this is Facebook’s Paper app, where draggable elements are used to adjust state.
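Here's a rough sketch of the snapping logic in plain TypeScript rather than the actual Framer code: on drag end, find the state whose position is closest to the drop point, then spring-animate each property toward it. The `layer` object, spring constants, and settle thresholds are illustrative assumptions.

```ts
// Generic re-implementation of the snapping idea (not the actual Framer code):
// on release, pick the state closest to the drop point and spring toward it.
interface SnapState { x: number; y: number; scale: number; rotation: number; }

function closestState(states: SnapState[], x: number, y: number): SnapState {
  return states.reduce((best, s) =>
    Math.hypot(s.x - x, s.y - y) < Math.hypot(best.x - x, best.y - y) ? s : best
  );
}

// Integrate one property per frame with a simple spring (semi-implicit Euler).
function springStep(value: number, target: number, velocity: number,
                    stiffness = 200, damping = 18, dt = 1 / 60) {
  const accel = stiffness * (target - value) - damping * velocity;
  const v = velocity + accel * dt;
  return { value: value + v * dt, velocity: v };
}

// Example drag-end handler for a hypothetical `layer` object exposing
// x/y/scale/rotation (e.g. a wrapper around a positioned DOM element).
function onDragEnd(layer: SnapState, states: SnapState[]): void {
  const target = closestState(states, layer.x, layer.y);
  const velocity: SnapState = { x: 0, y: 0, scale: 0, rotation: 0 };

  function frame() {
    let settled = true;
    for (const key of ["x", "y", "scale", "rotation"] as const) {
      const s = springStep(layer[key], target[key], velocity[key]);
      layer[key] = s.value;
      velocity[key] = s.velocity;
      if (Math.abs(target[key] - layer[key]) > 0.01 || Math.abs(s.velocity) > 0.01) {
        settled = false;
      }
    }
    if (!settled) requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```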


Check it out for yourself on GitHub at , or see another demo I did that night.