Meta open-sources multisensory AI model that combines six types of data

Illustration by Alex Castro / The Verge

Meta has announced a new open-source AI model that links together multiple streams of data, including text, audio, visual data, depth, temperature, and movement readings.

The model is only a research project at this point, with no immediate consumer or practical applications. But it points to a future of generative AI systems that can create immersive, multisensory experiences, and it shows that Meta continues to share AI research at a time when rivals like OpenAI and Google have become increasingly secretive.

The core concept of the research is linking together multiple types of data into a single multidimensional index (or “embedding space,” to use AI parlance). This idea may seem a little abstract, but it’s this same concept that underpins the…
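The shared embedding space can be sketched in a few lines of Python. The encoders below are trivial, hand-written stand-ins (not Meta's actual model, which uses learned neural networks): the point is only that inputs from different modalities get mapped into vectors of the same dimension, so they can be compared directly.

```python
import math


def _normalize(v: list[float]) -> list[float]:
    # Scale a vector to unit length so comparisons are direction-only.
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]


def embed_text(caption: str) -> list[float]:
    # Toy text encoder: three hand-picked features stand in for the
    # features a real model would learn.
    vowels = sum(c in "aeiou" for c in caption.lower())
    words = len(caption.split())
    return _normalize([float(vowels), float(len(caption) - vowels), float(words)])


def embed_audio(samples: list[float]) -> list[float]:
    # Toy audio encoder: mean level, peak level, and zero crossings.
    # Crucially, it outputs a vector in the SAME 3-dimensional space
    # as embed_text -- that shared space is the "embedding space."
    peak = max(abs(s) for s in samples)
    mean = sum(abs(s) for s in samples) / len(samples)
    crossings = sum(samples[i] * samples[i + 1] < 0 for i in range(len(samples) - 1))
    return _normalize([mean, peak, float(crossings)])


def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Because both vectors live in one space, a text embedding can be
    # compared against an audio embedding with no special handling.
    return sum(x * y for x, y in zip(a, b))


text_vec = embed_text("a dog barking in the rain")
audio_vec = embed_audio([0.1, -0.2, 0.3, -0.1, 0.05])
similarity = cosine_similarity(text_vec, audio_vec)
```

In a trained system like the one Meta describes, nearby points in that space correspond to related content across modalities, which is what makes cross-modal search and generation possible.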
