An exploration for my interaction design class: a little app for creating shared albums. Somehow the colors got a bit distorted in the video.
Lighting experiment, tinkering with programmatically generating shadows / highlights to match the direction and strength of your real-world lighting environment (using a webcam).
Put another way, user interface elements emulating real-world materials (extruded buttons, different surface finishes, etc.) could potentially react just like their real-world equivalents based on the lighting around you.
I first need to play with things a little more to improve latency, but I’m really interested in incorporating reflections in [simulated] semi-transparent / glossy materials.
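The core of the idea can be sketched as a crude heuristic (my assumption, not the prototype's actual approach): treat the brightness-weighted centroid of each webcam frame, relative to the frame's center, as pointing toward the dominant light source, and use mean brightness as light strength. Real implementations would do proper environment-light estimation, but this captures the shape of it:

```python
import numpy as np

def estimate_light(frame: np.ndarray) -> tuple[np.ndarray, float]:
    """Estimate a 2-D light direction and strength from a grayscale frame.

    Heuristic: the brightness-weighted centroid, measured relative to
    the image center, points roughly toward the brightest region --
    i.e. the apparent light source as seen by the webcam.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    total = frame.sum()
    if total == 0:
        return np.zeros(2), 0.0  # no light at all
    cx = (xs * frame).sum() / total
    cy = (ys * frame).sum() / total
    direction = np.array([cx - w / 2, cy - h / 2])
    norm = np.linalg.norm(direction)
    if norm > 0:
        direction /= norm
    strength = float(frame.mean() / 255.0)
    return direction, strength

# Synthetic frame standing in for a webcam capture:
# a bright patch in the top-right quadrant.
frame = np.zeros((120, 160))
frame[10:40, 110:150] = 255.0
direction, strength = estimate_light(frame)
```

The resulting direction vector could then drive the offset of CSS or canvas drop shadows on each UI element, inverted so shadows fall away from the light.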
One of my favorite features of recent smartphones/wearables is the ability to use voice commands by simply addressing the device. By forcing commands to begin with a phrase like “Ok Google” or “Ok Glass”, we’ve made it much easier for these devices to distinguish between ordinary conversations and commands.
I was thinking about how this might be extended to larger objects in shared/public spaces (traditional desktop computers, kiosks, wall displays, etc.), where (1) we probably can't train on a particular individual's voice, and (2) requiring a non-obvious phrase is undesirable.
To test things out, I threw together this prototype that listens for commands if and only if the system has a person’s attention, defined by head tracking and their proximity. This way I can just look at my computer to give it commands, but ordinary conversation with my roommate is correctly ignored.
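The gating logic boils down to a simple predicate in front of the speech recognizer. Here is a minimal sketch of that idea; the `Attention` fields, the `should_listen` name, and the distance threshold are all illustrative guesses, not the prototype's actual values:

```python
from dataclasses import dataclass

@dataclass
class Attention:
    facing_camera: bool  # from head tracking (e.g. a face/pose detector)
    distance_m: float    # estimated distance to the person

def should_listen(att: Attention, max_distance_m: float = 1.5) -> bool:
    """Only pass audio to the recognizer when the person is looking at
    the machine AND is close enough to plausibly be addressing it."""
    return att.facing_camera and att.distance_m <= max_distance_m

# Looking at the screen from arm's length -> listen.
near_and_looking = should_listen(Attention(facing_camera=True, distance_m=0.6))
# Chatting with a roommate across the room -> ignored.
far_away = should_listen(Attention(facing_camera=True, distance_m=3.0))
```

Because the gate sits entirely in front of the recognizer, ordinary conversation never even reaches speech processing, which is also a nice privacy property.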
Spent the last few days hanging out with the Dropbox design team at SXSW. Had an awesome time meeting everyone, and really enjoyed catching up with some familiar faces.
Thanks Austin, see you next year!
Columnist now uses Readability. No more annoying ads or odd page formatting to get in the way of reading.
I also added a blacklist to hide listicles, BuzzFeed, and the like.