Autonomous mobile robots can do dangerous, strenuous, or monotonous work. They are already being used in many places, for example in automobile production, in monitoring industrial plants, or in inspecting oil platforms.
However, these systems, controlled with the help of artificial intelligence (AI), can be misused for acts of sabotage or espionage or even as a weapon. Protection against misuse should therefore be considered during development, and research in this area should be expanded. In this way, autonomous mobile robots can be used safely and responsibly and unleash their diverse social and economic potential.

How robots and other AI systems can be protected
Robotics will find increasing application in many areas of society and the economy in the coming years. These systems can relieve people of heavy physical or repetitive tasks. Robotics has steadily become cheaper in recent years, while the systems have become more powerful thanks to technical innovation; the major advances in machine learning in particular were decisive for this development. Machine vision, an application of machine learning, offers especially large potential. With it, machines can, for example, use their sensors to document progress on construction sites or detect damage to a building, and they can survey industrial plants.
However, autonomous systems are also a target for misuse. Cyber attackers can, for example, use the sensors of insufficiently protected robots for espionage and surveillance. If criminals gain full control over an autonomous system, it can also be used for sabotage or even as a weapon. Misuse occurs when a system is used contrary to its actual purpose and fundamental values are violated, such as people's physical and psychological integrity, (democratic) freedoms and rights, privacy, or material and immaterial assets, or when the environment is damaged. It is therefore a central challenge to harden robotic applications against criminal use.
To make autonomous systems resistant to attempts at misuse, the general IT security of the system should always be considered. This applies to both the organizational and the technical dimensions. An overall integrated strategy is important.
In autonomous systems in particular, the early detection of anomalies and the restriction of certain system functionalities can be especially important. Anomalies can be detected with the help of AI, for example when an AI-supported system detects deviations from the environment model in autonomous driving or from the planned course of an operation. The actions of similar AI systems can be collected in a cloud environment, for example; by comparing individual actions, it can be determined whether they have already occurred in similar situations.
In this way, unusual and therefore suspicious requests and actions can be identified and checked by simulation in a cloud environment to analyze the expected outcome, such as changes to the system or its environment. The collected data can then be used to improve anomaly detection by feeding it into the learning process of the AI systems.
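As a rough illustration of this idea (the feature encoding, reference data, and threshold below are invented for the example and not taken from any specific system), such a check can be sketched as a nearest-neighbour comparison against actions recorded from similar systems:

```python
import numpy as np

# Hypothetical example: each action is encoded as a small feature vector
# (e.g., commanded velocity, heading change, tool activation flag).
# "reference_actions" stands in for actions collected from similar
# AI systems in a cloud environment.
reference_actions = np.array([
    [0.5, 0.0, 1.0],
    [0.4, 0.1, 1.0],
    [0.6, -0.1, 1.0],
])

def is_anomalous(action, reference, threshold=0.5):
    """Flag an action whose distance to every known action exceeds the threshold."""
    distances = np.linalg.norm(reference - action, axis=1)
    return distances.min() > threshold

# An action far outside the recorded behaviour is flagged for simulation and review.
suspicious = np.array([3.0, 2.0, 0.0])
print(is_anomalous(suspicious, reference_actions))  # True -> check via simulation
```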
Another way to protect autonomous systems against misuse is to limit their autonomy. The functionalities and capabilities of autonomous robots are restricted, for example, to certain locations, time windows, situations, and environments. With geofencing or targeting, for example, drone flights into specific areas are prevented, or construction machines are programmed to work only within the construction site.
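A minimal sketch of such a restriction, assuming a simple circular geofence and a fixed working-hours window (the coordinates, radius, and times are illustrative, not taken from any real deployment):

```python
from datetime import datetime, time
from math import hypot

# Illustrative geofence: the machine may only operate within 150 m of the
# site centre (given in a local metric coordinate frame) and only during
# the assumed working hours of the construction site.
SITE_CENTRE = (0.0, 0.0)
SITE_RADIUS_M = 150.0
WORK_HOURS = (time(6, 0), time(20, 0))

def operation_allowed(position_xy, now=None):
    """Return True only if the machine is inside the geofence and the time window."""
    now = now or datetime.now()
    inside_fence = hypot(position_xy[0] - SITE_CENTRE[0],
                         position_xy[1] - SITE_CENTRE[1]) <= SITE_RADIUS_M
    inside_hours = WORK_HOURS[0] <= now.time() <= WORK_HOURS[1]
    return inside_fence and inside_hours

# Outside the fence or outside working hours the actuators stay disabled.
print(operation_allowed((20.0, 35.0)))   # inside the fence; True only during working hours
print(operation_allowed((500.0, 0.0)))   # False: beyond the site boundary
```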
The capabilities of autonomous mobile robots can also be limited in combination with an environment analysis based on the robot's own sensors. The robot's sensors first perceive the surroundings, and only when certain features of the place and environment are recognized are certain capabilities of the device released. Such environment and location analyses are a good alternative or supplement, since procedures such as geofencing and targeting can sometimes be circumvented technically. The basis for this is comparing information from different sensors of a mobile AI system.
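The following sketch illustrates this gating idea under assumed feature names; in a real system the features would come from the robot's actual perception pipeline (camera-based scene classification, lidar geometry, GNSS, and so on):

```python
# Hypothetical environment features that characterize the approved work site.
EXPECTED_FEATURES = {"unpaved_ground", "site_fencing", "construction_markers"}

def release_capabilities(camera_features, lidar_features, min_agreement=2):
    """Unlock excavation only if enough independent sensors confirm the expected site features."""
    confirmations = sum(
        1 for sensor in (camera_features, lidar_features)
        if EXPECTED_FEATURES <= sensor  # sensor must report all expected features
    )
    return {"drive": True, "excavate": confirmations >= min_agreement}

# Both sensors agree on the expected environment, so excavation is released.
caps = release_capabilities(
    camera_features={"unpaved_ground", "site_fencing", "construction_markers"},
    lidar_features={"unpaved_ground", "site_fencing", "construction_markers"},
)
print(caps)  # {'drive': True, 'excavate': True}
```

Requiring agreement between independent sensors makes it harder to unlock capabilities by spoofing a single data source.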
A scenario in which anomaly detection and the restriction of functions could prevent misuse might take place on a construction site in the near future. An autonomous excavator is used there, performing tasks independently. A terrorist organization with cybercriminal capabilities manages to identify a security gap in the system and thereby gains access to the autonomous excavator. Their goal is to steer it onto an adjacent highway.
The vehicle rolls towards the highway and approaches the edge of the demarcated construction site. A certified system, isolated from the main system, immediately determines that the “approved” environment is being left and that the vehicle is deviating from its normal behavior. The system deactivates the excavator at the boundary of the construction site and brings it to a standstill. At the same time, it sends a request to headquarters, where the systems are checked, and offers the option of taking manual control of the vehicle.
In another scenario, the largest European port could be attacked. Here, the port of Rotterdam is monitored by AI-based underwater drones, which constantly check water depths and the condition of the seabed. A militant group intercepts a drone and attempts to plant a timed explosive charge on it.
However, the drone is protected by a (self-)monitoring system that analyzes the data from its acceleration, acoustic, and optical sensors and in this way detects anomalies, for example when the speed changes unusually while the drone is being intercepted. This allows interception or manipulation of a drone to be detected early, so that the drone can be taken out of service in good time and the explosive charge defused.
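A much simplified sketch of this self-monitoring idea, assuming the motion controller can predict the acceleration it commands and using a tolerance chosen purely for illustration:

```python
# Illustrative self-monitoring check on an underwater drone's accelerometer.
# "expected" is what the motion controller predicts; "measured" comes from the IMU.
ACCEL_TOLERANCE = 1.5  # m/s^2, assumed tolerance for normal operation

def interception_suspected(expected_accel, measured_accel, tolerance=ACCEL_TOLERANCE):
    """Flag a possible interception when measured acceleration deviates from the commanded motion."""
    deviation = abs(measured_accel - expected_accel)
    return deviation > tolerance

# A sudden jerk while the drone is being lifted out of the water is flagged,
# so the drone can be recalled and inspected before returning to service.
print(interception_suspected(expected_accel=0.2, measured_accel=4.8))  # True
```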
There are numerous options for protecting robotic systems from illegal use. Of particular importance are automated anomaly detection and the limitation of a device's functionalities, for example to certain locations, environments, or periods. Protection against misuse must be considered during the development of autonomous robots. As these systems are developed further, possible targets of abuse should be identified and suitable countermeasures derived, which can be implemented preventively or in an emergency. Last but not least, research on protection against misuse should be further promoted, with a focus on people's trust in autonomous robotics.