(ORDO NEWS) — The event took place, as befits the metaverse, online. Although it was physically impossible to listen to every talk, the conference turned out to be the most important industry event of the outgoing year. Popular Mechanics retells the most interesting talks for its readers.
The concept of the metaverse existed long before Mark Zuckerberg renamed Facebook to Meta. It was first described by Neal Stephenson in Snow Crash, a 1992 cyberpunk classic.
The metaverse is a virtual space equivalent to the real world and connected to it by a single economy, where one can work, have fun, communicate and spend time.
Everyone is now talking about metaverses, literally in unison, because in the near future they will change our world more than the Internet and mobile phones did. If fifteen years ago the most sought-after specialists were website developers, next year they will be the builders of virtual worlds.
We are already there
Sergey Trypsin, the permanent organizer of CG Event and the country's chief metaverse evangelist, is sure that we have been living in the metaverse for a long time: we spend much of our lives on the internet and are already heavily digitized.
The phone counts our steps, records our routes, and tracks our contacts, purchases, photos and videos; the fitness bracelet knows everything about our calories, emotions, pulse, sleep, stress and alpha rhythms. We are caught in the lenses of thousands of video cameras, and our documents sit in hundreds of databases.
All this data is continuously analyzed by a multitude of AI systems. In effect, we already live in a digital world. All that remains is to pull a digital shell over this information – and we get that very metaverse.
After renaming Facebook to Meta, the whole world learned about the metaverses.
As befits an evangelist, Sergey Trypsin believes in a single metaverse: there cannot be many of them, just as there cannot be many Internets. Figuratively speaking, we are now in early antiquity, with its crowd of rival gods: not only every IT giant but even mid-sized companies are trying to build their own virtual world.
Even the Yakut studio MyTona is launching its own metaverse. “The metaverse is one and unique,” Trypsin preaches. “The world is one, and the universe is one.” And this is actually true: creating a metaverse in which we could live not merely as data and numbers requires enormous resources that neither individual companies nor even entire states possess.
It is believed that if the leading technology giants joined forces, they could launch the metaverse within five years: Amazon could provide its servers, Neuralink the neural interfaces, Facebook the augmented-reality glasses and peripherals, Nvidia the hardware and blockchain infrastructure, and Unity and Epic Games could contribute their 3D engines.
There are many problems on the way to creating a metaverse, and solving them will provide millions of people with high-paying jobs.
Synthetic people
Valery Sharipov, founder of Malivar and the country's leading authority on avatars, explained how synthetic people are made today. We live in a world of social networks and informational fast food, where every person with a phone is their own media outlet.
The time of synthetic media is coming, when part of the content is generated by algorithms and neural networks. In the era of metaverses, it is hard to place a real person in this communication channel: they simply cannot keep up.
Making a three-dimensional model of a living person is not an option either. On the one hand, it is wildly expensive, and only film studios can afford it. On the other, a human likeness cannot be sold: it is inalienable and always belongs to its owner.
The solution is to generate people who never existed. Artificially created avatars have a unique appearance, the rights to which can be transferred, and the technology can give a character a unique voice, manner of movement and facial expressions.
A synthetic person can also be promoted: in Asian countries, for example, a culture of virtual idols has already developed – singers, YouTubers, product sellers. If such a character is well developed and has an impressive fan base, its price can skyrocket. It is ideal, in particular, for video commerce, which is hugely popular in China.
Images can be devised for each specific market, and with existing automatic translation systems added, such a character will be able to sell goods around the world. And the character itself can easily be sold to the entire audience.
Metaverse Entertainment creates meta idols to sell products in Asian markets.
However, creating such a character with classic 3D technologies is very expensive, so Malivar now uses an inexpensive method, ideal for social networks: they take the face of a synthetic character and “stretch” it onto a live model.
But technology does not stand still: neural systems have already appeared that can reproduce facial expressions and lip movements from voice alone, and they will surely soon learn to animate characters realistically.
The very concept of synthetic people, however, will not disappear. For the new generation, communicating with avatars and bots is normal; this is their world. True, young people have no money – but they do have computing power, and they can donate it so that a character can build its own worlds.
The new ADIDAS collection will consist of digital and physical items and will be sold as NFTs.
In the beginning was the word
Another interesting problem in the metaverse is voice processing. The image not only fails to convey what a real person feels, it often distorts perception: people can mask their state with facial expressions and words. But the voice conveys emotions perfectly – interlocutors sense them clearly even over the phone.
For people to understand each other, future metaverses will have to reduce voice distortion to a minimum. After all, we will enter the virtual world not from studios but from ordinary apartments, offices and noisy streets.
Neural networks can already isolate the voice: Descript, a program for bloggers, uses its Studio Sound feature to make studio recording unnecessary – it recognizes the voice, removes the surrounding noise, and restores the missing harmonics.
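The noise-removal step described above can be sketched with simple spectral gating – no neural network, just an FFT threshold – which is the classic baseline that tools like Studio Sound improve on. A minimal NumPy sketch (the function name and demo signal are illustrative):

```python
import numpy as np

def spectral_gate(signal, noise_clip, frame=256):
    """Suppress stationary background noise by spectral gating:
    estimate each frequency bin's typical noise magnitude from a
    noise-only clip, then zero the bins that fall below it."""
    n = len(signal) // frame * frame
    spec = np.fft.rfft(signal[:n].reshape(-1, frame), axis=1)

    # Per-bin noise floor, measured on a clip with no speech in it
    m = len(noise_clip) // frame * frame
    noise_spec = np.fft.rfft(noise_clip[:m].reshape(-1, frame), axis=1)
    threshold = np.abs(noise_spec).mean(axis=0)

    mask = np.abs(spec) > threshold          # keep only the strong bins
    return np.fft.irfft(spec * mask, n=frame, axis=1).ravel()

# Demo: a 440 Hz tone buried in white noise
rng = np.random.default_rng(0)
t = np.arange(16384) / 16000.0
tone = np.sin(2 * np.pi * 440 * t)
noisy = tone + 0.3 * rng.standard_normal(t.size)
cleaned = spectral_gate(noisy, 0.3 * rng.standard_normal(4096))
```

Production tools go much further – overlapping windows, learned masks, harmonic reconstruction – but the principle is the same: attenuate the time–frequency cells dominated by noise.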
Moreover, when two people speak from the same point, we cannot make them out. People can distinguish more than two simultaneous voices only when the sound sources sit in different places; even splitting interlocutors between the right and left channels helps. But what if there are fifteen speakers? Stereo sound alone won't save us.
High Fidelity offers a spatial audio solution in which the sources are spread apart and the voices become distinct. In the virtual world, though, this placement has to happen automatically – and in some games it already does.
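The simplest automatic version of that spatial separation is constant-power panning: assign each speaker an azimuth and split their signal between the two channels so that loudness stays constant across positions. A minimal NumPy sketch (the function and the positions are illustrative, not High Fidelity's actual API):

```python
import numpy as np

def pan_voices(voices, positions):
    """Mix mono voice tracks into one stereo track, placing each at an
    azimuth in [-1, 1] (-1 = hard left, +1 = hard right). Constant-power
    panning keeps cos^2 + sin^2 = 1, so no position sounds louder."""
    length = max(len(v) for v in voices)
    out = np.zeros((length, 2))
    for voice, pos in zip(voices, positions):
        theta = (pos + 1) * np.pi / 4                # map [-1, 1] -> [0, pi/2]
        out[:len(voice), 0] += voice * np.cos(theta)  # left-channel gain
        out[:len(voice), 1] += voice * np.sin(theta)  # right-channel gain
    return out

# Fifteen simultaneous "speakers" spread evenly across the stereo field
rng = np.random.default_rng(1)
voices = [0.1 * rng.standard_normal(8000) for _ in range(15)]
mix = pan_voices(voices, np.linspace(-1, 1, 15))
```

Full spatial audio adds distance attenuation and head-related filtering so sources separate in 3D, but even this two-channel spread is what lets listeners untangle simultaneous voices.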
Digitize the planet
To create a metaverse, we have to digitize the whole world. Thirty years ago, as a student, I modeled furniture in 3D – and an ordinary chair took me a whole day. Modern neural systems digitize the entire planet around the clock: the CityNeRF project, for example, does this from satellite images.
Details that are not visible from space can be easily added using frames from social networks: people have already visited all the nooks and crannies of our planet and photographed everything from any angle.
The lidars now built into iPads and iPhones – and soon into everything else – work incredible miracles: they successfully replace professional scanners costing tens of thousands of dollars. Special booths may well appear on the streets: step in for a second, pay a little money – and walk out with a perfect personal avatar.
But of course, there will still be work for high-class craftsmen. They will hand-craft very expensive artifacts – clothing, furniture, jewelry – and sell them as NFTs in specialized boutiques; no wonder luxury goods manufacturers are already reserving spots in the metaverses.
For everything else, markets and exchanges will emerge for ready-made characters, mass-produced clothing, houses, streets, even entire worlds – a single metaverse will hardly absorb it all. 3D modelers will live like musicians: model something successfully once, then sell it for the rest of your life.
But they will have to compete with AI – generative systems that create models the way others now write background tracks for supermarkets. I have seen generated infinite worlds that look like something out of a psychedelic trip, and it is already almost impossible to compete with them.
Crypto startup JADU has made millions of dollars selling pixelated jetpacks and hoverboards in NFTs.
Other
At one time, the gramophone record changed music forever: songs longer than five minutes – the playback time of the first discs – disappeared. Today, social networks shape the tastes of young people: the new generation is raised not on films but on short TikTok videos. The metaverses will affect us even more radically.
Already, neural networks analyze X-rays better than living radiologists. Soon we will wear clothes with built-in medical sensors, and the metaverse, drawing on a huge base of accumulated data, will diagnose us, recommend diets and make us exercise.
There is also bad news. Neural networks will be able to replace more than just doctors: they will do almost everything better than us – trade on a crypto exchange, write texts, digitize worlds, even have sex. And somehow we will have to learn to live with them in a shared metaverse. Although, according to Sergey Trypsin, we already do.