I was musing about the DALL-E neural network (https://openai.com/blog/dall-e/) and how we've moved from the text-to-text of GPT models to text-to-image, and I started to think about how this trend might continue into other sorts of X-to-Y transformations as well.
Not too long from now, we will have voice-to-Y transformers, and as brain-computer interfaces develop, we will get thought-to-Y transformers.
On the output end, we will soon have X-to-model transformers to make 3D models and schematics, X-to-video transformers to make movies, and X-to-interface transformers to create things like websites and video games. As 3D printing develops, we will have X-to-object and possibly even X-to-organism.
Imagine the house of your dreams. It's okay if you don't provide all of the design or technical details; the model will fill in the gaps for you. Not too long after, your imagination is a reality. That is the direction we are heading. How amazing that is, and how terrifying!
submitted by /u/AlgaeRhythmic