Passengers on the London Underground are now monitored by surveillance software based on artificial intelligence (AI). The software accesses live footage from CCTV cameras and analyzes it to estimate the likelihood of a crime from the movements of the people captured on video. The machine learning system detects aggressive behavior and also “looks” for knives or guns pointed at other people. In addition, it can detect when someone has fallen onto the tracks or has cheated the ticketing system.
The system works by analyzing the video feed once every 1/10 of a second. If it detects one of 11 behaviors deemed problematic, it sends an alert to an iPad or computer held by station staff. From October 2022 to the end of September 2023, Transport for London (TfL), which operates the Underground and bus network in London, England, tested these 11 algorithms to monitor people’s movements at Willesden Green Tube station.
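To make the reported pipeline easier to follow, here is a minimal sketch in Python of how such a loop might be structured. The behavior category names, the confidence threshold, and the classify_frame stub are assumptions made purely for illustration; TfL has not published its implementation.

```python
import random
import time
from dataclasses import dataclass

# Hypothetical list of 11 behavior categories, loosely based on the article;
# the exact categories used by TfL are not described in detail.
BEHAVIOR_CATEGORIES = [
    "aggressive_behavior", "weapon_visible", "person_on_tracks",
    "fare_evasion", "smoking_or_vaping", "wheelchair_user",
    "restricted_area_entry", "too_close_to_platform_edge",
    "unattended_item", "person_fallen", "antisocial_behavior",
]

ALERT_THRESHOLD = 0.8   # assumed confidence cut-off for a real-time alert
FRAME_INTERVAL = 0.1    # the article describes analysis every 1/10 of a second


@dataclass
class Detection:
    category: str
    confidence: float


def classify_frame(frame) -> list[Detection]:
    """Stand-in for the computer vision model: returns zero or more detections."""
    # Randomly simulate detections; a real system would run a vision model here.
    if random.random() < 0.01:
        return [Detection(random.choice(BEHAVIOR_CATEGORIES), random.random())]
    return []


def monitor(frames):
    """Analyze frames at ~10 Hz, alerting staff in real time or storing results."""
    stored_for_analysis = []
    for frame in frames:
        for det in classify_frame(frame):
            if det.confidence >= ALERT_THRESHOLD:
                # In the trial, real-time alerts were pushed to station staff's
                # iPads or computers; here we just print them.
                print(f"ALERT to station staff: {det.category} ({det.confidence:.2f})")
            else:
                # Lower-confidence detections are kept for later analysis,
                # mirroring the alerts stored for analytical purposes.
                stored_for_analysis.append(det)
        time.sleep(FRAME_INTERVAL)  # pace the loop to one frame every 0.1 s
    return stored_for_analysis
```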
This was the first trial of its kind, combining AI with live video footage to generate alerts that are then sent to security staff. During the trial, more than 44 thousand alerts were recorded, and around 19 thousand of them were sent to station staff in real time. The remaining 25 thousand alerts were stored for analytical purposes.
TfL uses computer vision algorithms to monitor the behavior of passengers while they are at the station. It has also said the system would be rolled out to other stations across London starting last December.
The AI System’s Monitoring Algorithms
During the trial at Willesden Green, a station that serves around 25 thousand passengers every day, the AI system was used to detect potential security incidents as well as criminal and antisocial behavior. It could identify passengers using wheelchairs, smoking e-cigarettes, entering restricted areas, or endangering themselves by standing too close to the platform edge.
The current system is far from perfect and frequently makes mistakes, for example flagging a child walking through the ticket gates with their parents as someone trying to cheat the ticket system.
The system also cannot distinguish folding bicycles from ordinary bicycles. To train it, police officers held knives and pistols in view of the cameras so that the footage could be used to teach the system to recognize them.
So far the system does not use facial recognition technology; it only identifies people’s movements and analyzes body language and the like. Even so, it is still considered to raise ethical, legal and privacy questions.
“Even though this trial did not involve facial recognition, the use of AI in public places to detect behavior and analyze body language raises many of the same ethical, legal and privacy questions as facial recognition technology,” said Michael Birtwistle, a researcher at the Ada Lovelace Institute.
Gates asked Hannah Ritchie what she would want to ask a time traveler. “What percentage of the world’s population will be able to live on up to USD 20 per day by 2100? The answer would reveal quite a bit about future poverty levels, and whether we have made progress on health, agriculture and poverty,” said Ritchie.
Currently, around 9% of the world’s population, more than 700 million people, live on less than USD 2.15 per day, which the World Bank defines as extreme poverty. If most people, especially in low-income countries, are living on close to USD 20 per day by 2100, that would be a remarkable achievement and a sign that humanity has likely made progress in mitigating climate change.
“My assumption would be that climate change has had very bad impacts, agriculture has been devastated, health is very poor, and people have been pushed into poverty,” Ritchie said. Gates, for his part, said he would prefer to ask about energy production and artificial intelligence. “How do you generate energy? And then how does AI help them come together, or how do they overcome those challenges,” he said.
Gates does not agree that AI will cause disaster; in his view, the technology can ultimately help the world solve global challenges in fields such as health and education. He still serves as an advisor to Microsoft, which has invested billions of dollars in the AI research startup OpenAI, after leaving its board of directors in 2020.