Okay, this is such a long shot it's not happening.
AI character rendering engine.
A locally hosted decision matrix that interfaces with a neural net server, can produce character models from description flags, and includes a gradient training algorithm driven by human input. Each TiTs executable would read the character's attributes and pull a 'fresh' decision framework from the server, using the player's own compute time to render a model/image. Then, once the player has given some yes/no or gradient feedback, that rating is sent back to the server as additional training input, so future decision frameworks improve. Eventually, conceivably, possibly, this could maybe work halfway to horrible.
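For what it's worth, the loop being described (client pulls a fresh framework, renders locally, ships a human rating back as training data) could be sketched roughly like this. Everything here is invented for illustration: `ModelServer`, `fetch_framework`, `submit_feedback`, and the toy "retraining" threshold are all hypothetical stand-ins, not a real design.

```python
import random

class ModelServer:
    """Stand-in for the central neural net server (hypothetical)."""

    def __init__(self):
        self.feedback_log = []  # accumulated human ratings used for retraining
        self.version = 1

    def fetch_framework(self):
        # Client pulls a 'fresh' decision framework; here it's just a
        # version tag plus some dummy weights.
        return {"version": self.version,
                "weights": [random.random() for _ in range(4)]}

    def submit_feedback(self, attributes, rating):
        # Yes/no or gradient (0.0-1.0) feedback becomes new training data.
        self.feedback_log.append((attributes, rating))
        if len(self.feedback_log) >= 3:  # pretend retraining threshold
            self.version += 1            # "retrain" and bump the framework
            self.feedback_log.clear()


def render_character(framework, attributes):
    # Placeholder for the local render step: the real engine would turn
    # description flags into an image using the player's compute time.
    score = sum(framework["weights"]) * len(attributes)
    return f"model(v{framework['version']}, score={score:.2f})"


def client_session(server, attributes, rating):
    fw = server.fetch_framework()                # 1. pull fresh framework
    image = render_character(fw, attributes)     # 2. render on user compute
    server.submit_feedback(attributes, rating)   # 3. ship rating back
    return image
```

Usage would look something like three players rendering and rating, after which the toy server bumps its framework version:

```python
server = ModelServer()
for flags, rating in [(["tall", "horns"], 1.0), (["short"], 0.0), (["tail"], 0.7)]:
    client_session(server, flags, rating)
```

The actual hard part, of course, is the neural net hiding behind `submit_feedback`, which this sketch cheerfully waves away.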
Yeah, this isn't happening. Unless someone here with at least one Ph.D. in computer science wants to tackle it.