(ORDO NEWS) — A digital twin is a copy of a person, product or process created using data. It may sound like science fiction, but some argue that you will probably have a digital twin within the next decade.
As a replica of a person, a digital twin would, ideally, make the same decisions you would make if presented with the same information.
This may seem like another speculative futurist claim. But it is far more plausible than most people would like to believe.
While we tend to assume that we are special and unique, given enough information, artificial intelligence (AI) can make many inferences about our personalities, social behaviors, and purchasing decisions.
The era of big data means that vast amounts of information (called “data lakes”) are being collected about your explicit attitudes and preferences, as well as the behavioral footprints you leave behind.
Equally disturbing is the extent to which organizations collect our data. In 2019, the Walt Disney Company acquired Hulu, a company that journalists and activists have pointed out has a dubious record for collecting data.
Seemingly innocuous phone apps, like those used to order coffee, can collect huge amounts of data from users every few minutes.
The Cambridge Analytica scandal illustrates these concerns: users and regulators alike worry about the prospect of someone being able to detect, predict and change their behavior.
But how concerned should we be?
High and low fidelity
In simulation studies, fidelity is determined by how closely a replica or model matches its target. Simulator fidelity refers to the degree of realism of the simulation relative to its real-world reference.
For example, a racing video game renders a scene that speeds up and slows down as we press keys on a keyboard or controller. A driving simulator, by contrast, may have a windshield, chassis, gear shifter, and gas and brake pedals.
The video game therefore has a lower degree of fidelity than the driving simulator.
A digital twin requires a high degree of fidelity, which could mean incorporating real-world information in real time: if it is raining outside now, it will rain in the simulator.
Digital twins in industry can have far-reaching consequences. If we can model a system of human-machine interaction, we can allocate resources, anticipate shortages and breakdowns, and make predictions.
A digital twin of a person would include a huge amount of data about that person's preferences, biases, and behavior, and would also draw on information about the user's immediate physical and social environment in order to make predictions.
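The combination described here, stored preference data plus a live feed of context, can be sketched in a few lines of code. This is purely a hypothetical illustration (the class, field names, and scoring rule are invented for this example, not drawn from any real digital-twin system):

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    # Hypothetical sketch: a minimal "twin" profile combining stored
    # preferences with real-time context to make a toy prediction.
    name: str
    preferences: dict = field(default_factory=dict)  # e.g. {"coffee": 0.8}
    context: dict = field(default_factory=dict)      # live environment data

    def update_context(self, **readings):
        # Fold in fresh readings from sensors or apps (weather, location...).
        self.context.update(readings)

    def predict_choice(self, options):
        # Toy rule: pick the option with the highest stored preference,
        # slightly boosted if it appears in the current "nearby" context.
        def score(option):
            base = self.preferences.get(option, 0.0)
            bonus = 0.1 if option in self.context.get("nearby", []) else 0.0
            return base + bonus
        return max(options, key=score)

twin = DigitalTwin("alex", preferences={"coffee": 0.8, "tea": 0.5})
twin.update_context(nearby=["tea"], weather="rain")
print(twin.predict_choice(["coffee", "tea"]))  # prints "coffee"
```

A real system would replace the hand-written scoring rule with a learned model and the two dictionaries with continuous streams of behavioral and sensor data, which is exactly why the data and processing requirements are so large.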
These requirements mean that a true digital twin remains a distant possibility for the foreseeable future. The number of sensors required to accumulate the data, and the processing power needed to maintain the virtual model of the user, would be enormous. For now, developers settle for low-fidelity models.
The creation of a digital twin raises social and ethical issues concerning the integrity of the data, the accuracy of the model's predictions, and the observational capabilities required to create and update the digital twin, as well as ownership of and access to it.
British Prime Minister Benjamin Disraeli is often quoted as saying, “There are three kinds of lies: lies, damned lies and statistics”, implying that numbers cannot be trusted.
The data collected about us is based on the collection and analysis of statistics about our behavior and habits in order to make predictions about how we will behave in certain situations.
This view reflects a misunderstanding of how statisticians collect and interpret data, but it remains a source of serious concern.
One of the most important ethical issues associated with the digital twin involves the quantitative fallacy: the assumption that numbers have meaning divorced from their context.
When we look at numbers, we often forget that their meaning derives from the measurement tools used to collect them. And a measurement tool may work in one context but not in another.
When collecting and using data, we must be aware that the sample includes certain features and not others. Often this choice is made for reasons of convenience or because of the practical limitations of the technology.
We must be critical of any claims based on data and artificial intelligence, because the design decisions behind them are not visible to us. We need to understand how data is collected, processed, used and presented.
Power imbalances are fueling a growing public debate about data, privacy and surveillance.
On a smaller scale, this could cause or widen the digital divide – the gap between those who have access to digital technologies and those who do not. On a large scale, it threatens a new colonialism based on access to and control over information and technology.
Even the creation of digital twins with a low level of fidelity makes it possible to control users, draw conclusions about their behavior, try to influence them and represent them to others.
While this may help in the healthcare or education sector, failure to give users the ability to access and evaluate their data could jeopardize individual autonomy and the collective good of society.
Data subjects do not have access to the same resources as large corporations and governments: they lack the time, the training and, perhaps, the motivation. Consistent and independent oversight is needed to ensure that our digital rights are protected.