I want to make user-aware, context-changing equipment. Well, at least I just had this idea that it’d be cool if…
Think about this –
A bunch of doctors, nurses and support staff are in an operating room working. Wouldn’t it be cool if the equipment could see who was looking at it and change its context (display) accordingly? More specifically, the equipment would know who was looking at it via face, color or other pattern recognition.
Specialized equipment could be cross-functionally programmed, so if someone needed “another person’s information,” the display could make that information easier for them to see while remaining unchanged for the original user/primary consumer.
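To make the “augment for a second viewer, stay unchanged for the primary” idea concrete, here’s a minimal sketch. All names here (the roles, `DISPLAY_PROFILES`, `render_for`) are hypothetical, and a real system would get the viewer list from an upstream face/pattern-recognition stage rather than as a plain argument:

```python
# Hypothetical display profiles per role; a real device would load these
# from its own configuration, not hard-code them.
DISPLAY_PROFILES = {
    "surgeon": ["vitals_large", "imaging_overlay"],
    "anesthesiologist": ["airway_pressure", "infusion_rates"],
    "nurse": ["supply_checklist", "procedure_timer"],
}

def render_for(viewers):
    """Return the ordered list of panels to show, given who is looking.

    The first viewer is treated as the primary user: their layout comes
    first and is never removed. Panels needed by secondary viewers are
    appended, so the primary display is augmented, not replaced.
    """
    if not viewers:
        return []
    primary, *secondary = viewers
    panels = list(DISPLAY_PROFILES.get(primary, []))
    for role in secondary:
        for panel in DISPLAY_PROFILES.get(role, []):
            if panel not in panels:
                panels.append(panel)  # add without disturbing existing panels
    return panels

print(render_for(["surgeon", "nurse"]))
```

The design choice worth noting is the append-only merge: the primary consumer’s panels keep their positions no matter who else glances at the screen.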
Well, I think it’d be cool, at least.