SOFTWARE

OBJECTIVES
The software manages the integrated operation of the AUV's onboard systems, handling navigation control, sensor reading, actuator commands, and communication with the surface station. It coordinates decisions in real time, ensuring operational stability and adaptation to underwater conditions.
CONTRIBUTIONS
The Software area contributes to the AUV's autonomy through algorithms such as SLAM, which enable real-time navigation and mapping. It develops control systems, computer vision, and artificial intelligence for on-board decision-making. It also runs simulations and integrates the on-board systems, ensuring efficient communication between sensors, actuators, and processors.
CHALLENGES
The main challenges of AUV software are ensuring autonomy and reliability in a hostile underwater environment, without access to GPS and with limited communication. Sensor data must be processed and decisions made in real time, even in the presence of noise and uncertainty. In addition, the integration between sensors and actuators must be efficient, given the processing, memory, and power constraints of embedded systems.

CONTROL

The AUV control system is responsible for maintaining stability, adjusting orientation, and executing precisely defined trajectories. It is based on dynamic models of the vehicle and is verified through MATLAB simulations, where we fine-tune parameters to ensure robustness and performance under non-linear conditions.
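As an illustration of the kind of feedback loop tuned in these simulations, the sketch below implements a basic PID controller for depth hold. The gains, limits, and setpoint are hypothetical placeholders, not the team's actual parameters, which come from the MATLAB tuning described above.

```python
class PID:
    """Basic PID controller with output clamping and a clamped integrator."""

    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Proportional term
        p = self.kp * error
        # Integral term, clamped to the output range to limit windup
        self.integral += error * dt
        i = max(self.out_min, min(self.out_max, self.ki * self.integral))
        # Derivative term (zero on the first call, when no history exists)
        d = 0.0 if self.prev_error is None else self.kd * (error - self.prev_error) / dt
        self.prev_error = error
        # Clamp the combined output to the actuator's limits
        return max(self.out_min, min(self.out_max, p + i + d))


# Example: one depth-hold step toward a 2.0 m setpoint (hypothetical gains)
controller = PID(kp=0.8, ki=0.1, kd=0.2)
thrust = controller.update(error=2.0 - 1.5, dt=0.1)  # error = setpoint - depth
```

The clamped integrator is a common anti-windup choice for thruster-limited vehicles, where a saturated actuator would otherwise let the integral term grow without effect.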

SIMULATION

Simulations are performed in a dedicated virtual environment, developed in Unity and integrated with ROS, enabling real-time tests with simulated sensors, virtual actuators, and embedded logic identical to that used in the physical robot. This allows us to evaluate the AUV's behavior around obstacles, test complete missions, and debug ROS nodes, accelerating the development cycle and avoiding critical failures in the field.
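The core idea of this workflow, running the same control logic against a simulated plant before deploying it, can be sketched without Unity or ROS. The toy first-order depth model and all constants below are illustrative assumptions, not the simulator's actual physics.

```python
def simulate_depth_hold(controller, setpoint, steps=1000, dt=0.05):
    """Run a control loop against a toy depth model (illustrative physics only)."""
    depth, velocity = 0.0, 0.0
    for _ in range(steps):
        # The controller callable stands in for the embedded logic under test
        thrust = controller(setpoint - depth, dt)
        # Toy dynamics: thrust accelerates the AUV, water drag opposes motion
        accel = 2.0 * thrust - 0.5 * velocity
        velocity += accel * dt
        depth += velocity * dt
    return depth


# Proportional-only controller with clamped output, as a minimal stand-in
final_depth = simulate_depth_hold(
    lambda err, dt: max(-1.0, min(1.0, 1.5 * err)), setpoint=2.0
)
```

In the real pipeline the plant is the Unity physics scene and the controller runs as a ROS node; the benefit is the same, regressions surface in simulation rather than in the water.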

COMPUTER VISION

Computer vision is divided into two fronts. The first uses trained neural networks to identify markers, obstacles, and competition-specific objects, enabling goal-oriented navigation. The second maps the environment through stereo vision, generating point clouds in real time and contributing to the 3D maps used in SLAM. Both run onboard and in real time, respecting the system's processing limits.
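The geometry behind turning stereo disparities into point clouds can be sketched as follows. The calibration values are hypothetical, and a real pipeline would obtain disparities from a stereo matcher (e.g., OpenCV's StereoSGBM) rather than receive them directly.

```python
def disparity_to_point(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project one pixel with known disparity to a 3D camera-frame point.

    Standard rectified-stereo geometry: Z = fx * baseline / disparity,
    then X and Y follow from the pinhole camera model.
    """
    if disparity <= 0:
        return None  # no valid stereo match for this pixel
    z = fx * baseline / disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)


# Hypothetical calibration: 700 px focal length, 12 cm baseline, 640x480 image
point = disparity_to_point(u=400, v=240, disparity=35.0,
                           fx=700.0, fy=700.0, cx=320.0, cy=240.0, baseline=0.12)
```

Applying this per pixel of a disparity map yields the point cloud fed to SLAM; note that depth resolution degrades quadratically with distance, which is why small disparities are usually discarded onboard.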

TEAM


Wesley de Abreu

Area Leader

Focus on computer vision.


Rian Rocha

Focus on control and architecture.


Pedro Henrique

Focus on SLAM and mapping.


Giovanni Nogueira

Focus on embedded systems and record keeping.


Daniel Alves

Focus on mapping and actuator control.