Enterprise AI Analysis: Multi-Neural Network Localisation System with Regression and Classification on Football Autonomous Robots

Robotics & Autonomous Systems


In environments like the RoboCup Middle Size League (MSL), precise and rapid localisation of robots is crucial for effective autonomous interaction. This study addresses the limitations of conventional localisation approaches, often based on single-camera systems or sensors such as LiDAR (Light Detection and Ranging) and infrared, by developing a robust Artificial Intelligence (AI)-based multi-camera solution. The method uses multiple neural networks, breaking the problem down while taking advantage of both classification and regression. The solution includes a classification neural network to detect field markers, such as line intersections, and two regression neural networks: one for calculating the position of the markers, and another for determining the robot's position in real time. It combines both approaches while maintaining the desired performance, accuracy, and robustness, simplifying the training process and adapting it to different scenarios. Designed specifically for the MSL's high-speed and precision demands, the system employs data augmentation techniques to ensure resilience against variations in lighting, viewing angle, and position. The results show that this optimised approach improves spatial awareness and accuracy, promising advancements in robot football. Beyond the MSL, the method has potential for broader real-world uses that require dependable, real-time localisation in dynamic settings.

Executive Impact Summary

This multi-neural network approach delivers high-precision, real-time localisation critical for high-speed autonomous systems. By combining classification for marker detection and regression for accurate positioning, it ensures robust performance in dynamic environments, with proven success in competitive robotics.


Deep Analysis & Enterprise Applications


Introduction to Autonomous Robotics Localisation

RoboCup, initiated in 1997, is a global challenge aiming to advance robotics and AI, with the long-term goal of robots playing football against humans by 2050. The competition drives innovation in areas such as AI, Computer Vision, Control, Localisation, and Multi-Agent Cooperation. The Middle Size League (MSL) involves robots weighing up to 40 kg that operate without external sensors, making on-board localisation a critical, high-speed task, as robots move at speeds of up to 8 m/s.

This paper presents an AI-based multi-camera system using multiple neural networks for precise and rapid robot localisation. It leverages classification for marker detection and regression for accurate position determination, offering robustness against lighting and angle variations.

State of the Art in Intelligent Robot Localisation

Localisation is fundamental for intelligent football robots. Traditional methods often rely on single cameras or sensors like LiDAR and IMUs, but these can be limited by dynamic environments and lighting. Neural networks provide a robust alternative by learning visual features to map position and orientation.

Key MSL localisation methods include:

  • Triangulation Approach: Uses coloured goals/posts as landmarks.
  • Geometry Localisation: Employs Hough transforms for field lines and markers.
  • Monte Carlo Localisation (MCL): Bayesian filtering with particles for position estimation.
  • Matching Optimisation: Aligns visual features to field markers for accuracy.
  • Template Matching: Transforms images to top-down view for matching field spots.

The field is evolving from traditional vision-based approaches to advanced neural network applications, addressing strategic challenges and technological demands in MSL. Multi-camera systems like the one proposed offer enhanced spatial awareness and resolution compared to monocular systems, although they present processing challenges.

Comparison of Localisation Methods

Monocular Vision
  • Advantages: lower cost and complexity.
  • Disadvantages: limited depth perception; sensitive to lighting variations.

Multi-Camera Systems
  • Advantages: broader field of view; reduced occlusions; greater contextual data.
  • Disadvantages: increased processing requirements; potential redundancy or complexity.

Odometry
  • Advantages: simple to implement; does not require external infrastructure.
  • Disadvantages: accumulation of errors over time; wheel slippage or sensor inaccuracies.

Neural Network-Based
  • Advantages: highly adaptable; capable of handling complex, dynamic environments; scalable for various tasks.
  • Disadvantages: dependent on the quality of training data; requires significant computational resources.

Robot Hardware & Vision System

The robots utilise an OmniDirectional Vision System positioned at the highest permissible point to maximise field of view. This system integrates three cameras, each phased by 120° and tilted downwards at 30°, providing a 360° view without blind spots. Each camera (OV2710 sensor) operates at 640 × 480 pixels with a 125° FOV at 120 fps, ensuring high frame rates. An Adafruit BNO085 digital compass provides yaw values for orientation, complementing the visual data.
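The stated geometry can be sanity-checked numerically: three cameras offset by 120° with a 125° horizontal FOV each leave no angular gap around the robot. A minimal sketch, where the 1° sampling grid is an illustrative assumption:

```python
# Sketch: verifying that three cameras phased 120 degrees apart, each with a
# 125-degree horizontal FOV, cover the full 360 degrees with no blind spots.
# Camera layout values come from the text; the 1-degree grid is illustrative.

def covered_degrees(headings, fov):
    """Count whole degrees around the robot that fall inside at least one
    camera's field of view."""
    covered = set()
    half = fov / 2
    for heading in headings:
        for d in range(360):
            # Signed, wrap-aware difference between sample direction and heading.
            diff = (d - heading + 180) % 360 - 180
            if abs(diff) <= half:
                covered.add(d)
    return len(covered)

headings = [0, 120, 240]   # cameras phased by 120 degrees
fov = 125                  # each OV2710 camera's horizontal FOV

print(covered_degrees(headings, fov))  # 360 -> no blind spots
```

With a single camera the same check returns 125, so the 360° result is due to the 120° phasing, which leaves a 5° overlap at each seam.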

Developed Solution Architecture

The proposed system leverages neural networks to interpret visual data and precisely determine robot positions. It's particularly effective in dynamic environments like football. The solution consists of four main components:

Enterprise Process Flow

  1. Image Acquisition via Cameras
  2. Marker Detection Neural Network (Classification)
  3. Coordinate System Translation Neural Network (Regression)
  4. Integration with Orientation Sensors
  5. Localisation Neural Network (Regression)

The system detects FIFA and MSL-regulated field markings (e.g., corner marks, penalty spots, centre circle, 'L', 'T', '+' shapes, goalposts) without requiring field modifications. Initial development used Webots R2023a simulation for data generation, followed by manual annotation and Roboflow data augmentation (flip, rotation, crop, saturation, brightness, exposure, blur, noise) to create a robust dataset of simulated and real images.
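The augmentations listed above can be illustrated as minimal NumPy operations on an image array; the parameter ranges here are illustrative assumptions, not the settings used in the study:

```python
# Sketch: flip, brightness/exposure, and noise augmentations of the kind
# applied via Roboflow, reproduced as minimal NumPy operations on an
# H x W x 3 image. Parameter ranges are illustrative assumptions.
import numpy as np

def augment(img, rng):
    img = img[:, ::-1]                        # horizontal flip
    img = img * rng.uniform(0.7, 1.3)         # brightness/exposure scaling
    img = img + rng.normal(0, 5, img.shape)   # sensor-style Gaussian noise
    return np.clip(img, 0, 255).astype(np.uint8)

rng = np.random.default_rng(42)
frame = rng.integers(0, 256, (480, 640, 3)).astype(np.float64)  # camera-sized frame
out = augment(frame, rng)
print(out.shape)  # (480, 640, 3)
```

In real training pipelines the geometric transforms (flip, rotation, crop) must also be applied to the marker bounding-box annotations, which augmentation tools handle automatically.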

Neural Network Training & Results

The Marker Detection Neural Network uses YOLOv10s, chosen for its efficiency and accuracy in detecting seven unique marker types. Training involved 1500 epochs with automated batch sizing and 640 × 640 pixel input images. The model achieved a mean Average Precision (mAP) of 0.896 at an IoU threshold of 0.5, demonstrating high detection accuracy despite white markers on a green background. Most markers showed true positive rates above 85%.
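The mAP@0.5 criterion above rests on intersection-over-union (IoU) between predicted and annotated boxes: a detection counts as a true positive when IoU reaches 0.5. A minimal sketch, with illustrative box coordinates rather than data from the study:

```python
# Sketch: the IoU measure behind the reported mAP@0.5. Boxes are
# (x1, y1, x2, y2) pixel tuples; the example coordinates are illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (100, 100, 160, 160)    # predicted marker box (hypothetical)
truth = (110, 105, 170, 165)   # annotated ground truth (hypothetical)
print(iou(pred, truth) >= 0.5)  # True -> counts as a true positive at IoU 0.5
```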

For Camera-World Coordinates Transformation, a dedicated regression neural network maps screen coordinates (u, v) to real-world angle and distance. This model, trained on 545 data points, eliminates the need for explicit formulas and adapts to lens variations. To ensure real-time performance, pre-calculated coordinates are integrated into a lookup table, requiring 14.74 MB of memory for the 640 × 480 resolution.
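One plausible layout for that pre-computed table, storing two float64 values (angle, distance) per pixel for each of the three cameras, reproduces the quoted 14.74 MB; the exact layout used by the authors is an assumption here:

```python
# Sketch: a pre-computed (angle, distance) lookup table replacing the
# camera-to-world regression network at inference time. The float64,
# three-camera layout is an assumption that matches the 14.74 MB figure.
import numpy as np

width, height, cameras = 640, 480, 3
table = np.zeros((cameras, height, width, 2), dtype=np.float64)  # (angle, distance)

print(table.nbytes / 1e6)  # 14.7456 -> the quoted ~14.74 MB

def world_polar(cam, u, v, table=table):
    """O(1) pixel -> (angle, distance) lookup for camera `cam`."""
    return tuple(table[cam, v, u])
```

In practice each entry would be filled once by evaluating the trained regression network over every pixel, turning a per-frame inference cost into a single memory read.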

The Localisation Neural Network is a feedforward regression network with 31 inputs (angle and distance from detected markers, plus compass orientation). It was trained on 40,000 data points across the football field, with noise (±10 degrees for angle, ±20 cm for distance) and marker normalisation (0-1 range) to enhance resilience. Closer markers were prioritised. The X-axis network achieved an average error of 3.82 cm, with 80.78% accuracy within 20 cm. The Y-axis network achieved an average error of 1.61 cm, with 91.55% accuracy within 20 cm.
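The 31-input count is consistent with 15 marker slots of (angle, distance) plus one compass reading; that slot layout, the padding scheme, and the 12 m normalisation range are assumptions in this sketch, while the ±10° / ±20 cm training noise, 0-1 scaling, and closer-marker priority follow the text:

```python
# Sketch: assembling the localisation network's 31-float input vector.
# Slot layout (15 markers x 2 + compass), zero padding, and the 12 m
# normalisation range are assumptions; noise magnitudes follow the text.
import random

MAX_DIST = 12.0  # assumed 0-1 normalisation range in metres (field scale)

def input_vector(markers, compass_deg, train=False, rng=random):
    """markers: list of (angle_deg, distance_m) for detected field markers.
    Returns the 31-float network input, every value scaled to [0, 1]."""
    feats = []
    # Closer markers are prioritised: sort by distance, keep at most 15.
    for angle, dist in sorted(markers, key=lambda m: m[1])[:15]:
        if train:  # resilience noise injected only during training
            angle += rng.uniform(-10.0, 10.0)   # +/- 10 degrees
            dist += rng.uniform(-0.20, 0.20)    # +/- 20 cm
        feats += [(angle % 360.0) / 360.0,
                  min(max(dist, 0.0), MAX_DIST) / MAX_DIST]
    feats += [0.0, 0.0] * (15 - len(feats) // 2)  # pad unused marker slots
    feats.append((compass_deg % 360.0) / 360.0)   # yaw from the BNO085 compass
    return feats

vec = input_vector([(45.0, 2.0), (300.0, 5.5)], compass_deg=90.0)
print(len(vec))  # 31
```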

~2.7 cm Average Localisation Error Across X and Y Axes, Demonstrating Sub-Ball-Size Precision.

Limitations & Advantages

The system has a detection limit of 6 meters for most markers (10 meters for goalposts) due to camera resolution (640 × 480 pixels). This can compromise localisation in distant field zones. Dynamic game conditions, where markers might be obscured by other robots or objects, also pose a challenge to the system's robustness.

A significant advantage is its independence from colour transitions, making it resilient to lighting variations. The auto-calibration of cameras and neural network adaptability further enhance robust marker detection. Data augmentation during training improves generalisation across diverse environmental settings.

Discussion and Future Directions

Marker detection accuracy significantly impacts overall localisation performance; fewer detected markers can lead to increased errors. The system addresses data redundancy from overlapping camera views by processing each marker detection once. Prioritising closer markers enhances learning and robust predictions, especially with varying marker densities.
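Processing each marker detection once, as described above, can be sketched as a wrap-aware merge of near-identical detections from overlapping camera views; the merge tolerances here are illustrative assumptions:

```python
# Sketch: collapsing duplicate detections of the same marker seen by two
# cameras in their overlap region. Detections are (marker_type, angle_deg,
# distance_m); the 5-degree / 0.3 m tolerances are illustrative.

def dedupe(detections, ang_tol=5.0, dist_tol=0.3):
    kept = []
    for kind, ang, dist in detections:
        for k_kind, k_ang, k_dist in kept:
            diff = abs((ang - k_ang + 180) % 360 - 180)  # wrap-aware angle gap
            if kind == k_kind and diff <= ang_tol and abs(dist - k_dist) <= dist_tol:
                break  # same physical marker, already recorded once
        else:
            kept.append((kind, ang, dist))
    return kept

dets = [("penalty_spot", 58.0, 3.1),   # camera 1
        ("penalty_spot", 60.5, 3.0),   # camera 2, same marker in the overlap
        ("L", 140.0, 4.2)]
print(len(dedupe(dets)))  # 2
```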

Simulation with Webots and data augmentation with Roboflow proved invaluable for creating diverse, realistic datasets. While robust in typical scenarios (errors exceeding 90 cm in fewer than 0.43% of occurrences), extreme conditions with limited or obstructed markers remain a challenge. Given the 50 cm robot diameter, the observed errors are negligible in most gameplay.

Sensor fusion, integrating locomotion encoders and compass data, enhances stability by mitigating positional jumps. A Kalman filter refines these estimates, ensuring smooth and consistent localisation. The system processes both classification and regression networks in an average of 38 ms (26.3 Hz), slightly faster than the communication system (25 Hz), with potential for further optimisation.
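That fusion step can be sketched as a one-dimensional Kalman filter per axis, predicting from encoder displacement and correcting with the network's estimate; the process and measurement variances below are illustrative assumptions:

```python
# Sketch: a 1-D Kalman filter of the kind described above, smoothing the
# network's position estimates with odometry to damp positional jumps.
# The variances q and r are illustrative assumptions.

def kalman_step(x, p, odo_delta, z, q=0.01, r=0.09):
    """x, p: previous position estimate and its variance; odo_delta:
    displacement from the locomotion encoders (m); z: position from the
    localisation network (m); q, r: process and measurement variances."""
    # Predict from odometry.
    x_pred = x + odo_delta
    p_pred = p + q
    # Correct with the neural-network measurement.
    k = p_pred / (p_pred + r)          # Kalman gain
    return x_pred + k * (z - x_pred), (1 - k) * p_pred

x, p = 0.0, 1.0                        # initial estimate and variance
for z in [0.10, 0.12, 0.11, 0.13]:     # successive network estimates (m)
    x, p = kalman_step(x, p, odo_delta=0.01, z=z)
```

A full implementation would run one such filter per axis (or a joint 2-D state) at the 26.3 Hz cycle rate, tuning q and r against the encoder and network error statistics.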

Real-World Validation: 2024 RoboCup MSL Scientific Challenge

This innovative multi-neural network localisation system was rigorously validated in the 2024 RoboCup Middle Size League (MSL) Scientific Challenge, where it achieved a first-place award. Its capability to address real-world localisation challenges, combined with its high precision and resolution (average error of 3.82 cm for X-axis and 1.61 cm for Y-axis), proved the system's reliability and accuracy in a highly dynamic competitive environment.

The success underscores the practical applicability of combining classification and regression neural networks for robust, real-time spatial awareness in autonomous robotics. This achievement not only demonstrates a significant step forward for robot football but also highlights the potential for this adaptable solution in broader real-world applications requiring dependable, precise localisation.

Conclusions

This study successfully introduced a novel multi-camera localisation system for robotic football, integrating classification and regression neural networks. The classification network detects markers, which are then processed by two regression networks to compute precise (x, y) coordinates. This architecture effectively bridges traditional vision-based methods with modern AI, offering adaptability to complex spatial relationships and dynamic environments.

The system demonstrated robust, scalable, and efficient real-time performance through computational optimisations like pre-computed transformations. Its success is attributed to a diverse, high-quality marker dataset, despite visibility remaining a critical factor. Validated in the RoboCup MSL, the system achieved acceptable error margins, earning a first-place award for its innovation, efficiency, and adaptability.

Calculate Your Potential AI Impact

Estimate the efficiency gains and cost savings your enterprise could achieve by integrating advanced AI solutions like the one researched.


Your AI Implementation Roadmap

A typical phased approach for integrating advanced localisation and autonomous systems into your operations, ensuring a smooth transition and measurable impact.

Phase 01: Discovery & Strategy

Comprehensive assessment of existing systems, infrastructure, and operational workflows. Define clear objectives, KPIs, and a strategic roadmap for AI integration, including data readiness and system requirements.

Phase 02: Prototype & Data Engineering

Develop a proof-of-concept for the multi-neural network localisation. Focus on data acquisition, annotation, and augmentation strategies tailored to your specific environment and operational needs.

Phase 03: Model Training & Integration

Train and refine marker detection, coordinate translation, and localisation neural networks. Integrate with existing robotic platforms, leveraging sensor fusion and real-time processing capabilities.

Phase 04: Testing & Optimization

Rigorous testing in simulated and real-world environments. Fine-tune model parameters, optimise for speed and accuracy, and conduct user acceptance testing with your operational teams.

Phase 05: Deployment & Scaling

Full-scale deployment of the autonomous localisation system. Establish monitoring protocols, provide comprehensive training, and plan for iterative improvements and scaling across additional assets or locations.

Ready to Transform Your Operations with AI?

Connect with our AI specialists to explore how high-precision localisation and autonomous systems can drive efficiency and innovation in your enterprise.
