The invention relates to a method of converting a set of words into a three-dimensional scene description, which may then be rendered into three-dimensional images. The invention may generate arbitrary scenes in response to a substantially unlimited range of input words. Scenes may be generated by combining objects, poses, facial expressions, environments, etc., so that they represent the input set of words. Poses may have generic elements so that referenced objects may be replaced by those mentioned in the input set of words. Likewise, a character may be dressed according to its role in the set of words. Various constraints for object positioning may be declared. The environment, including but not limited to place, time of day, and time of year, may be inferred from the input set of words.
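The method described above can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the patented implementation: it uses tiny hand-built lexicons (standing in for the method's object and environment knowledge bases) to map an input set of words to a simple scene description, inferring the environment and time of day from the words as the abstract describes.

```python
# Illustrative sketch only (hypothetical names, not the patented method):
# map an input set of words to a minimal 3-D scene description.

# Tiny hand-built lexicons standing in for the method's object,
# environment, and time-of-day knowledge.
OBJECTS = {"cat", "chair", "table", "lamp"}
ENVIRONMENTS = {"beach": "outdoor", "kitchen": "indoor", "park": "outdoor"}
TIMES_OF_DAY = {"sunset": "evening", "dawn": "morning", "noon": "midday"}

def words_to_scene(words):
    """Build a scene description: objects to place plus inferred setting."""
    tokens = {w.lower() for w in words}
    return {
        "objects": sorted(tokens & OBJECTS),
        "environment": next(
            (ENVIRONMENTS[t] for t in tokens if t in ENVIRONMENTS),
            "unspecified",
        ),
        "time_of_day": next(
            (TIMES_OF_DAY[t] for t in tokens if t in TIMES_OF_DAY),
            "unspecified",
        ),
    }

scene = words_to_scene(["The", "cat", "sat", "on", "the", "chair", "at", "sunset"])
print(scene)
# → {'objects': ['cat', 'chair'], 'environment': 'unspecified', 'time_of_day': 'evening'}
```

A full system would of course resolve poses, spatial constraints, and character dress as the abstract notes; this sketch shows only the word-to-scene-description mapping step.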

 