An-Najah National University
Faculty of Engineering & Information Technology
Presented in partial fulfillment of the requirements
for a bachelor’s degree in computer engineering
RoboMealMate: Innovating Restaurant Service Hospitality with AI
(Artificial Intelligence)
22 January 2025
Students:
Osama Mansour
Ahmad Rasheed
Supervisors:
Dr. Bahaa Shaqour
Dr. Muhaned Al-jabi
Acknowledgment
We would like to express our sincere gratitude to Dr. Bahaa Shaqour and Dr. Muhaned Al-
jabi, our supervisors, for their invaluable guidance, continuous support, and insightful
feedback throughout the development of this project. Their expertise and encouragement
have been instrumental in the successful completion of RoboMealMate.
We extend our appreciation to An-Najah National University, Faculty of Engineering &
Information Technology, for providing us with the resources and knowledge that made this
project possible.
A special thank you to our families and friends for their unwavering support, patience, and
motivation during this journey.
Lastly, we acknowledge all the researchers and developers whose work in robotics, artificial
intelligence, and automation has inspired and contributed to the realization of our project.
Disclaimer
This report was written by student(s) at the Engineering Department, Faculty of Engineering,
An-Najah National University. It has not been altered or corrected, other than editorial
corrections, as a result of assessment and it may contain language as well as content errors. The
views expressed in it together with any outcomes and recommendations are solely those of the
student(s). An-Najah National University accepts no responsibility or liability for the
consequences of this report being used for a purpose other than the purpose for which it was
commissioned.
Contents
Disclaimer ............................................................................................................................................... 2
List of Figures ......................................................................................................................................... 4
List of tables ............................................................................................................................................ 6
Abstract ................................................................................................................................................... 7
1 Introduction ..................................................................................................................................... 8
1.1 Background: .......................................................................................................................... 8
1.2 Objectives: ............................................................................................................................. 8
1.3 Importance and Significance: ................................................................................................ 8
1.4 Report Overview: .................................................................................................................. 9
2 Theoretical Background and Previous Work ................................................................................ 10
2.1 Robotic Service Systems in Hospitality .............................................................................. 10
2.2 Key Technologies in RoboMealMate .................................................................................. 10
2.2.1 Artificial Intelligence (AI) in Robotics ........................................................................... 10
2.2.2 Natural Language Processing (NLP) ............................................................................... 10
2.2.3 SLAM(Simultaneous Localization and Mapping) .......................................................... 11
2.2.4 Autonomous Navigation and Obstacle Avoidance .......................................................... 11
2.3 Previous Work in Service Robotics ..................................................................................... 11
2.4 Challenges and Opportunities .............................................................................................. 11
3 Methodology ................................................................................................................................. 13
3.1 Design: ................................................................................................................................. 13
3.1.1 physical design: ............................................................................................................... 13
3.1.2 Hardware design: ............................................................................................................ 15
3.1.3 Software design: .............................................................................................................. 20
3.2 Simulation: .......................................................................................................................... 21
3.2.1 URDF (Unified Robot Description Format): .................................................................. 21
3.2.2 Gazebo simulation: .......................................................................................................... 30
3.2.3 Gazebo controller (drive simulated robomealmate): ....................................................... 35
3.2.4 SLAM (Simultaneous Localization and Mapping): ........................................................ 39
3.2.5 Navigation ....................................................................................................................... 48
3.3 Hardware assembly.............................................................................................................. 56
3.3.1 Low-level controller assembly ........................................................................................ 56
3.3.2 High-level controller connections ................................................................................... 63
3.3.3 Power supply and buttons ............................................................................................... 64
3.4 Real Robomealmate Implementation & ROS Integration ................................................... 65
3.4.1 ROS2 control understanding ........................................................................................... 66
3.4.2 ROS2 control integration with simulated robomealmate ................................................ 70
3.4.3 ROS2 control integration with real robomealmate .......................................................... 72
3.5 Open AI and google cloud integration for Realtime conversation ...................................... 78
4 Standards, Specifications, and Constraints ................................................................................... 79
4.1 Standards and Specifications ............................................................................................... 79
4.2 Design and constrains .......................................................................................................... 79
5 Results and Analysis ..................................................................................................................... 80
5.1 Localization and mapping ................................................................................................... 80
5.2 Navigation ........................................................................................................................... 81
5.3 Realtime conversation and order taking .............................................................................. 81
6 Discussion ..................................................................................................................................... 82
6.1 Resolution of the Problem ................................................................................................... 82
6.2 Contributions to the Fields .................................................................................................. 82
6.3 Logical Implications of Results ........................................................................................... 82
6.4 Limitations ........................................................................................................................... 82
7 Conclusions and Recommendation ............................................................................................... 83
7.1 Summary of Key Results ..................................................................................................... 83
7.2 Recommendations for Improvement ................................................................................... 83
7.3 actual conclusion ................................................................................................................. 83
7.4 Open problems ..................................................................................................................... 83
7.5 Future Work ......................................................................................................................... 83
7.6 Final remarks ....................................................................................................................... 84
References ............................................................................................................................................. 85
List of Figures
Figure 3.1Base dimensions ................................................................................................................... 13
Figure 3.2Feet Dimensions ................................................................................................................... 14
Figure 3.3Body Dimensions ................................................................................................................. 14
Figure 3.4Head and Robot Dimensions ................................................................................................ 15
Figure 3.5Arduino Mega 2560 .............................................................................................................. 16
Figure 3.6Raspberry PI4 ....................................................................................................................... 16
Figure 3.7Ultrasonic Sensor(HC-SR04) ............................................................................................... 16
Figure 3.8RPLidar A1 ........................................................................................................................... 17
Figure 3.9Raspberry camera ................................................................................................................. 17
Figure 3.10DC-Motor ........................................................................................................................... 17
Figure 3.11DC Motor Encoder ............................................................................................................. 18
Figure 3.12IBT 2 Motor driver ............................................................................................................. 18
Figure 3.13LM2596 DC-DC ................................................................................................................. 18
Figure 3.14LCD 7inch display .............................................................................................................. 19
Figure 3.15Speakers .............................................................................................................................. 19
Figure 3.16microphone ......................................................................................................................... 19
Figure 3.17Robot state publisher and URDF diagram .......................................................................... 22
Figure 3.18 robomealmate package tree ............................................................................................... 27
Figure 3.19 robomealmate package creation and build ........................................................................ 27
Figure 3.20 robomealmate state publisher launch................................................................................. 28
Figure 3.21 topic list after launch publisher ......................................................................................... 28
Figure 3.22 robomealmate joint gui launch .......................................................................................... 29
Figure 3.23 robomealmate TF ............................................................................................................... 29
Figure 3.24 robomealmate Model ......................................................................................................... 30
Figure 3.25 robomealmate simulation launch ....................................................................................... 34
Figure 3.26 robomealmate simulation launch ....................................................................................... 34
Figure 3.27 topic list after simulation ................................................................................................... 35
Figure 3.28 understanding control diagram .......................................................................................... 35
Figure 3.29 control gazebo ................................................................................................................... 36
Figure 3.30 controlling simulated robomealmate diagram ................................................................... 36
Figure 3.31 simulation with gazebo controller ..................................................................................... 37
Figure 3.32 teleop twist keyboard with cmd_vel topic ......................................................................... 38
Figure 3.33 simulated robomealmate moving ....................................................................................... 38
Figure 3.34 mapping ............................................................................................................................. 39
Figure 3.35 localization 1 ..................................................................................................................... 40
Figure 3.36 localization 2 ..................................................................................................................... 40
Figure 3.37 localization 3 ..................................................................................................................... 41
Figure 3.38 SLAM 1 ............................................................................................................................. 41
Figure 3.39 SLAM 2 ............................................................................................................................. 42
Figure 3.40 SLAM 3 ............................................................................................................................. 42
Figure 3.41 map frame 1 ....................................................................................................................... 43
Figure 3.42 map frame 2 ....................................................................................................................... 43
Figure 3.43 map and odom topics ......................................................................................................... 44
Figure 3.44 gazebo obstacles ................................................................................................................ 45
Figure 3.45 rviz2 view .......................................................................................................................... 46
Figure 3.46 launching SLAM toolbox .................................................................................................. 46
Figure 3.47 rviz2 map view .................................................................................................................. 47
Figure 3.48 rviz2 map save ................................................................................................................... 47
Figure 3.49 localization mode ............................................................................................................... 48
Figure 3.50 robomealmate navigation diagram .................................................................................... 50
Figure 3.51 robomealmate simulated navigation video ........................................................................ 55
Figure 3.52 motor driver concept .......................................................................................................... 56
Figure 3.53 PWM concept .................................................................................................................... 57
Figure 3.54 motor controller concept .................................................................................................... 57
Figure 3.55 open loop control concept .................................................................................................. 58
Figure 3.56 closed-loop control concept ............................................................................................... 59
Figure 3.57 IBT_2 motor driver ............................................................................................................ 60
Figure 3.58 motor, motor drivers, and encoders ................................................................................... 62
Figure 3.59 raspberry camera connection ............................................................................................. 63
Figure 3.60 lidar connected to raspberry .............................................................................................. 63
Figure 3.61 robomealmate .................................................................................................................... 65
Figure 3.62 framework concept ............................................................................................................ 66
Figure 3.63 controller manager ............................................................................................................. 66
Figure 3.64hardware interfaces ............................................................................................................. 67
Figure 3.65 multiple hardware interface ............................................................................................... 68
Figure 3.66 hardware side ..................................................................................................................... 68
Figure 3.67controller side ..................................................................................................................... 69
Figure 3.68 video for ros2 control integration for simulated robomealmate ........................................ 72
Figure 3.69 ros2 control concept recap ................................................................................................. 73
Figure 3.70 hardware interface ............................................................................................................. 74
Figure 3.71 GPT and Google cloud services ........................................................................................ 78
Figure 5.1 mapping and localization results ......................................................................................... 80
Figure 5.2 robomealmate navigation .................................................................................................... 81
List of tables
Table 3.1cost table ................................................................................................................................ 20
Table 3.2 differences between local and global costmap ...................................................................... 53
Table 3.3 IBT_2 pin diagram ................................................................................................................ 60
Table 3.4 motor drivers --> Arduino Connections ................................................................................ 60
Table 3.5 Encoder pins .......................................................................................................................... 61
Table 3.6 Encoder --> Arduino connections ......................................................................................... 61
Table 3.7 ultrasonic pins ....................................................................................................................... 62
Table 3.8 ultrasonic connections ........................................................................................................... 63
Table 3.9 Bluetooth and RGB led strip connections ............................................................................. 63
Table 3.10 components voltage and current .......................................................................................... 64
Abstract
The RoboMealMate project is designed to harness the technological revolution in artificial
intelligence and robotics for restaurant automation, customer service, and smart hospitality.
The project aims to enhance customer service efficiency, reduce operational costs, and
provide customers with a unique experience. The robot will serve as a waiter. It will navigate
autonomously in restaurant environments, interacting with customers, taking orders, and
conversing with them.
• Important aspects covered:
1. Robot interaction: implement a friendly design that seamlessly interacts with
customers; the design includes a screen for signs and emotions and a
microphone for speech recognition.
2. Navigation and Obstacle Avoidance: implement LiDAR-based SLAM
(Simultaneous Localization and Mapping) and image processing for accurate
movement and obstacle avoidance.
3. AI: using speech recognition, order processing, and image-based learning to
enhance adaptability and add human-like behavior.
4. Safety and reliability: ensure that the robot is safe for restaurant
environments, especially crowded ones, by choosing the right
components, sensors, and a design compatible with the work
environment.
• Objectives:
1. Enhance customer service.
2. Increase productivity and accuracy in the work environment.
3. Reduce operational costs.
4. Harness the AI and robotics revolution to serve people; RoboMealMate is
just the beginning.
• Methodology:
1. Hardware process:
Create a design for the robot body, then set up and connect the components,
such as the Raspberry Pi, Arduino Mega, sensors (LiDAR, ultrasonic, etc.),
camera, screen, motors, and power supplies, and make sure that all components
are connected correctly.
2. Software process:
In this process, we will implement the algorithms for the AI parts, such as the
OpenAI API, speech-to-text and text-to-speech APIs, image processing, and
LiDAR-based SLAM.
3. Testing and debugging process: Test the robot in different situations, ensure that the
functionalities are working correctly, and handle the errors.
• Similar Projects:
While other restaurant robots exist, such as Pepper and Bear Robotics’ Servi,
RoboMealMate aims to stand out by incorporating advanced AI for interactive
communication, adaptable learning, and seamless integration in small to medium-
sized restaurant spaces.
Chapter 1
1 Introduction
1.1 Background:
When we talk about restaurants, customer satisfaction is one of the most important elements
of success: customer satisfaction means more sales, more sales mean more profit, and more
profit means financial success. However, the restaurant sector faces many problems,
especially in the field of customer service.
One of the main problems in restaurants is order taking. This problem manifests in several
ways; common issues are miscommunication between staff and customers, manual entry
errors, slow service, inflexibility, and lack of personalization.
The increasing demand for autonomous service in restaurants, where efficiency, accuracy,
and customer experience are paramount, exposes a gap in how a machine or robot can
achieve these goals without humans. Before the AI revolution this was hard to accomplish,
but today, with AI tools widely available and easy access to the network, the mission has
become possible.
RoboMealMate was created to solve most of the order-taking issues. It is an AI-driven robot
that navigates autonomously in a restaurant environment, engages with customers, takes
orders, converses with them, and then places the orders with the kitchen.
1.2 Objectives:
RoboMealMate aims to give customers a distinctive experience through their interaction
with a human-like machine that meets their needs with the required speed and accuracy. It
also aims to solve problems related to order taking and to make customer service in the
restaurant environment more accurate and faster, harnessing technological developments in
AI and robotics for customer service, and to deliver a cost-effective and scalable solution
that can be deployed in diverse restaurant environments.
1.3 Importance and Significance:
The importance of RoboMealMate lies in meeting the global market’s demands for self-
service and harnessing technology in business. The field of restaurant automation is expected
to grow significantly as companies strive to achieve efficiency and customer satisfaction.
Inaccurate ordering, slow service, and excessive labor expenses are just a few of the
inefficiencies that plague traditional restaurant operations. In order to solve these problems,
RoboMealMate offers a dependable and automated solution that raises operational
consistency and service quality. Additionally, the robot's capacity to connect with patrons
through individualized interactions and natural language processing improves the eating
experience and distinguishes it from conventional service techniques.
Additionally, RoboMealMate is an advancement in scalable and sustainable technology. Its
design may be modified to fit various restaurant settings, making it a flexible option for
businesses with varying clientele and sizes.
demands but also establishes itself as a pioneer in restaurant technology by satisfying the
increasing demand for intelligent automation.
1.4 Report Overview:
This report provides a comprehensive overview of the RoboMealMate project, detailing the
design, development, and implementation of an AI-driven robotic waiter for restaurant
environments. The document is structured as follows:
• Chapter 1: Introduction – Introduces the project, its objectives, and its significance in
the restaurant industry.
• Chapter 2: Theoretical Background and Previous Work – Reviews relevant
technologies, similar projects, and advancements in AI and robotics.
• Chapter 3: Methodology – Covers the physical and hardware design, software
architecture, simulation process, ROS2 integration, and AI components.
• Chapter 4: Standards, Specifications, and Constraints – Outlines the standards and
specifications followed and the design constraints of the project.
• Chapter 5: Results and Analysis – Presents the localization and mapping, navigation,
and real-time conversation results.
• Chapter 6: Discussion – Discusses how the results resolve the problem, the
contributions, and the limitations.
• Chapter 7: Conclusions and Recommendations – Summarizes findings and suggests
future enhancements.
Chapter 2
2 Theoretical Background and Previous Work
This chapter provides an overview of the theoretical concepts and previous research that
underpin the RoboMealMate project. It aims to establish a foundation for understanding the
key technologies used in the project and to highlight relevant work in the fields of robotics,
artificial intelligence (AI), computer vision, natural language processing (NLP), and
autonomous navigation.
2.1 Robotic Service Systems in Hospitality
Robotics in the hospitality sector has garnered significant interest in recent years, particularly
in food service. Robots are increasingly used to automate repetitive tasks, improve customer
experience, and enhance operational efficiency. In restaurants, robots are used for a variety of
functions, such as serving food, greeting customers, and even preparing meals. These systems
leverage technologies like computer vision, machine learning, speech recognition, and
autonomous navigation.
One notable example is the Pepper robot, developed by SoftBank Robotics, which has been
used in multiple industries for customer service applications, including restaurants. Pepper
uses facial recognition and natural language processing to interact with people, offering a
friendly and intuitive experience. However, despite these advancements, many existing
robotic systems still face challenges in terms of mobility, adaptability to dynamic
environments, and the complexity of human-robot interactions.
2.2 Key Technologies in RoboMealMate
2.2.1 Artificial Intelligence (AI) in Robotics
AI plays a pivotal role in modern robotics, enabling machines to learn from their
environment, make decisions, and improve performance over time. In the RoboMealMate
project, AI technologies are used to enhance the robot’s ability to interact with humans,
understand commands, and make autonomous decisions.
2.2.2 Natural Language Processing (NLP)
NLP is a crucial component for enabling effective human-robot interaction in the
RoboMealMate project. NLP algorithms are used to interpret voice commands and process
spoken language, allowing the robot to take orders from customers and respond accordingly.
Technologies like Google Cloud Speech-to-Text and OpenAI GPT are used to transcribe
speech into text and generate natural language responses, respectively. These tools enhance
the robot’s ability to understand and interact in a conversational manner, making the user
experience more intuitive and engaging.
Incorporating NLP into the system allows for multilingual capabilities, enabling the robot to
serve a broader customer base by processing commands in different languages. This feature is
particularly valuable in globalized restaurant environments where diverse language speakers
interact with the service system.
2.2.3 SLAM (Simultaneous Localization and Mapping)
The robot uses SLAM (Simultaneous Localization and Mapping) techniques to navigate its
environment autonomously. SLAM allows the robot to build a map of its surroundings while
simultaneously determining its position within that map, which is essential for safe and
efficient movement in dynamic environments, such as busy restaurant floors. The use of
Slamtec RPLIDAR A1M8 provides high-precision scanning capabilities for this task.
2.2.4 Autonomous Navigation and Obstacle Avoidance
Autonomous navigation is central to the functionality of the RoboMealMate robot. The robot
is equipped with ultrasonic sensors, and LiDAR, to detect obstacles and move through its
environment efficiently. Path planning and obstacle avoidance are achieved through
sophisticated algorithms that integrate data from these sensors to determine the best routes
and avoid collisions with both static and dynamic obstacles.
The integration of omni wheels in the robot’s design allows it to move in any direction, providing
greater flexibility in movement and improving its ability to navigate tight spaces in crowded
restaurant environments.
2.3 Previous Work in Service Robotics
Several studies have explored the use of robots in service-oriented environments, including
restaurants. For example, Savioke's Relay robot is designed to deliver items in hotels, while
Bear Robotics' Servi robot is employed in restaurants to serve food to customers
autonomously. These robots rely on AI, computer vision, and autonomous navigation to
perform their tasks. However, there are still gaps in handling more complex interactions, such
as understanding diverse customer preferences, dynamically adjusting to busy environments,
and providing a personalized experience.
Another notable example is the Karakuri robot, which specializes in personalized food
service. It combines robotics with AI to create dishes according to customer preferences,
enhancing the dining experience. This type of service personalization, along with real-time
adaptive behavior, is an area of focus in the RoboMealMate project.
2.4 Challenges and Opportunities
The integration of multiple advanced technologies into the RoboMealMate project introduces
both challenges and opportunities for future research and development. One of the primary
challenges is the dynamic nature of restaurant environments. Human movement, changing
obstacles, and unpredictable interactions create difficulties in navigation and object
recognition. Additionally, while current NLP technologies are impressive, they still struggle
with accents, noisy environments, and understanding complex or ambiguous speech.
Another challenge is the robot’s ability to understand and react to social cues, such as body
language or emotional states, in a way that feels natural to human users. Research into
emotion-aware robotics and personalized service delivery continues to evolve, and the
RoboMealMate project aims to push the boundaries in this area by incorporating more
sophisticated emotion recognition and response systems.
On the other hand, these challenges present exciting opportunities for innovation. The
RoboMealMate project stands at the intersection of AI, robotics, and human-computer
interaction, providing a platform for further advancements in service robots, customer
experience, and AI-driven automation.
Chapter 3
3 Methodology
In this chapter, we provide the detailed steps we followed to accomplish this work.
3.1 Design:
This section is divided into three subsections as follows:
I. Physical design: in this subsection, we will talk about the shape, dimensions, material,
and parts.
II. Hardware design: In this subsection, we will discuss hardware components, their uses,
and their costs.
III. Software design: This subsection will discuss the software approach: ROS2 (Robot
Operating System), ROS2 topics, nodes, packages, and plugins.
3.1.1 physical design:
When we say the robot will serve as a waiter, we must take into consideration that the shape,
dimensions, and design must be consistent with the task it will perform, so the shape and
dimensions should be human-like. We achieved this as follows:
A. Base: It will be the main part to which the rest of the body will be attached and will
be primarily responsible for movement and holding hardware components for
movement.
❖ Dimensions:
✓ Height: 15 cm.
✓ Width: 60 cm.
✓ Length: 30 cm.
Figure 3.1 Base dimensions
Figure 3.3 Body Dimensions
B. Left and right feet: it will be the middle part between the base and body and will be
a wire tunnel between the base and the body.
❖ Dimensions:
✓ Height: 50 cm.
✓ Width: 15 cm.
✓ Length: 15 cm.
✓ Distance between: 10 cm
C. Body: it will attach to the feet; it contains the batteries, the low-level controller, and
the speakers.
❖ Dimensions:
✓ Height: 40 cm.
✓ Width: 40 cm.
✓ Length: 15cm
Figure 3.2 Feet Dimensions
D. Head: it will attach to the body; it contains the screen, camera, and high-level
processor.
❖ Dimensions:
✓ Height: neck 10 cm, head 20 cm.
✓ Width: neck 15 cm, head 30 cm.
✓ Length: 15 cm.
So the total dimensions:
▪ Height: 125 cm.
▪ Width: 60 cm.
▪ Length: 30 cm.
The material of the robot will be light wood, for lighter weight and a cohesive
body.
3.1.2 Hardware design:
3.1.2.1 Hardware components:
Below are the hardware components used in the project and the responsibility for each one:
1. Arduino Mega 2560:
Figure 3.4 Head and Robot Dimensions
Figure 3.7 Ultrasonic Sensor (HC-SR04)
It is the low-level controller, responsible for taking sensor readings, sending them to the
high-level controller via serial communication, and generating PWM signals for the
motor drivers according to the high-level controller's orders.
2. Raspberry Pi 4:
It is the high-level controller, responsible for decision-making. It gives orders to the low-
level controller via serial communication and runs the GPT and voice recognition scripts
and the ROS2 environment; we can say it is the brain of our robot.
3. Ultrasonic sensors (HC-SR04):
It uses ultrasonic sound waves to measure the distance to an object and is responsible for
detecting nearby, short-range obstacles.
4. RPLIDAR A1 360:
Figure 3.5 Arduino Mega 2560
Figure 3.6 Raspberry PI4
RPLIDAR A1 is based on the laser triangulation ranging principle and uses high-
speed vision acquisition. It is mainly used to generate a local map using SLAM
(Simultaneous Localization and Mapping) algorithms. This allows our robot to know the
environment around it and detect obstacles. It sends its readings to the Raspberry Pi
via serial.
5. Camera:
It is mainly used to monitor the robot environment.
6. Motors (jgb37-520):
Figure 3.8 RPLidar A1
Figure 3.9 Raspberry camera
Figure 3.10 DC-Motor
It is a 12 V DC motor with a gearbox, responsible for robot movement.
7. Encoders:
It provides feedback about the motor's position, speed, and direction of rotation, and
it is used in closed-loop control.
8. Batteries:
✓ 12 V 7 Ah: to power the motors.
✓ 2 × 3.7 V 2600 mAh: to power the Arduino Mega and sensors.
✓ 10000 mAh power bank: to power the Raspberry Pi.
9. Motor Drivers (IBT 2):
It is used to control the DC motors: direction control, speed control, and handling high
current and voltage.
10. Buck converter (LM2596 DC-DC):
Figure 3.11 DC Motor Encoder
Figure 3.12 IBT 2 Motor driver
Figure 3.13 LM2596 DC-DC
It is used to reduce the input voltage to a lower, stable output voltage.
11. Display:
It will be used to display the robot's face and interactions.
12. Other components:
✓ Wheels.
✓ Wires.
✓ RGB led strip.
13. Speakers:
It is used for sound output.
14. Microphone:
It is used for voice input and voice recognition.
Figure 3.14 LCD 7-inch display
Figure 3.15 Speakers
Figure 3.16 Microphone
3.1.2.2 Quantities and costs:
Component name | Quantity | Price ($) | Subtotal ($)
Arduino Mega 2560 | 1 | 30 | 30
Raspberry Pi 4 | 1 | 110 | 110
Ultrasonic sensors (HC-SR04) | 4 | 5 | 20
RPLIDAR A1 | 1 | 120 | 120
Raspberry camera v2 | 1 | 45 | 45
JGB37-520 motors & encoders | 4 | 25 | 100
12 V battery | 1 | 22 | 22
3.7 V Li-ion battery | 2 | 5 | 10
Power bank | 1 | 22 | 22
Motor drivers (IBT 2) | 4 | 15 | 60
LM2596 DC-DC | 2 | 5 | 10
Display | 1 | 100 | 100
Wheels | 4 | 15 | 60
Wires | – | 30 | 30
Wood | – | 60 | 60
Buttons | 4 | 5 | 20
Speakers | 2 | 10 | 20
Microphone | 1 | 10 | 10
RGB strip | 1 | 10 | 10
Total | 35 | | 859
Table 3.1 Cost table
3.1.3 Software design:
Our robot is complex and performs a set of complex operations, so the software that runs
everything together should be flexible and powerful enough to manage all the processes. To
achieve this goal, we chose ROS2 (Robot Operating System).
ROS2 has many features; the following are the most important:
a) Improved communication middleware.
b) Real-time capabilities.
c) Cross-platform compatibility.
d) Support for Microcontrollers.
e) Time Synchronization.
So ROS is used for high-level control and decision-making, Arduino C code is used for low-
level control, and a serial bridge between the two sends and receives data between them.
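As an illustration of this bridge, the following is a minimal Python sketch using pyserial; the port name, baud rate, and message format are assumptions made for illustration and are not the project's actual protocol.

import serial  # pyserial

# Hypothetical example: port, baud rate, and message format are assumptions.
ser = serial.Serial('/dev/ttyACM0', baudrate=115200, timeout=1)

def send_motor_command(left_pwm: int, right_pwm: int) -> None:
    # Send one newline-terminated command line to the low-level controller, e.g. "m 120 120"
    ser.write(f"m {left_pwm} {right_pwm}\n".encode())

def read_sensor_line() -> str:
    # The low-level controller replies with one text line per reading,
    # e.g. "e 1024 1030" for two encoder counts (hypothetical format).
    return ser.readline().decode(errors='ignore').strip()

send_motor_command(120, 120)
print(read_sensor_line())

In this sketch the high-level controller sends commands and reads feedback over the same serial port, which is the general idea of the bridge described above.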
3.1.3.1 ROS2 Overview:
The Robot Operating System (ROS) is a set of software libraries and tools for building robot
applications. In the following, we will review the most important concepts and terminology
so that we can understand the mechanism of the upcoming work:
▪ Nodes: the basic building blocks; each node is a standalone process responsible for a
specific functionality, for example controlling the motors.
▪ Topics: the way nodes communicate via the publish-subscribe model; a node publishes
data on a topic and others subscribe to the topic to receive the data.
▪ Messages: the text, sensor data, numbers, or more complex information that nodes
exchange with each other.
▪ Services: allow nodes to send a request and receive a response.
▪ Actions: support long-running tasks, providing feedback and result messages during
execution.
▪ Parameter Server: a storage system containing configurations that nodes
can access and modify at runtime.
▪ Packages: in ROS, code is organized into packages; each package contains libraries,
configuration files, and more.
▪ Launch Files: XML, YAML, or Python-based files used to start multiple nodes and
configure parameters simultaneously.
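To make these concepts concrete, the following is a minimal sketch of a ROS2 node written with rclpy; the node and topic names are hypothetical and are not part of the project code.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class OrderPublisher(Node):
    def __init__(self):
        super().__init__('order_publisher')  # hypothetical node name
        # Publish String messages on a hypothetical 'orders' topic, twice per second
        self.publisher_ = self.create_publisher(String, 'orders', 10)
        self.timer = self.create_timer(0.5, self.publish_order)

    def publish_order(self):
        msg = String()
        msg.data = 'table 3: one order'
        self.publisher_.publish(msg)

def main():
    rclpy.init()
    node = OrderPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

Any other node can subscribe to the same topic name to receive these messages, which is exactly the publish-subscribe model described above.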
3.1.3.2 Arduino:
We will use Arduino code and libraries for low-level control; the C/C++ language provides a
set of libraries that can handle sensor reading, motor control, and more.
3.2 Simulation:
Before we dive into the process, we will explain two important tools provided by ROS for
simulation purposes:
a) Rviz2 (Robot Visualization 2): a next-generation visualization tool for ROS2 that
provides a graphical interface for viewing the robot, sensor data, maps, and more. We
will learn more about its features as we go.
b) Gazebo: a robust platform for simulating robots in complex, real-world-like
environments, allowing developers to test and refine their robotic applications
without needing physical hardware.
In the following, we dive into the simulation process, which is divided into a set of steps;
we will explain everything as we go.
3.2.1 URDF (Unified Robot Description Format):
A URDF file tells ROS about the structure of our robot and how it is all set up. It serves as a
standard way to represent a robot model, making it easy to share, simulate, and integrate with
ROS tools and libraries.
First, we should understand how these files work. The XML-based files use a tool called
xacro; xacro combines a set of files into a single complete URDF. The URDF is then passed
to a node called robot_state_publisher, which publishes the robot's TF transforms, and the
description becomes available on a topic called /robot_description. If there are any movable
joints in the URDF, it expects input values published to the /joint_states topic; as a test, we
will use the joint_state_publisher_gui node to provide these values. The following diagram
explains the process.
Our URDF is split into 4 files:
1. robot.urdf.xacro: this file includes all the other xacro files, and it is the one we pass to
robot_state_publisher.
2. robot_core.xacro: this file contains the main components of the robot: wheels, base,
feet, body, and head.
3. lidar.xacro: this file contains the description of the lidar and, later, its driver.
4. camera.xacro: this file contains the description of the camera and, later, its driver.
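For illustration, once the workspace dependencies are installed, the combined URDF can also be generated manually from a terminal with the xacro command-line tool (the file path here is assumed to be run from the package's description directory):

xacro robot.urdf.xacro > robot.urdf

This produces the same single description file that the launch file later passes to robot_state_publisher.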
3.2.1.1 robot_core.xacro file:
In this file, we describe the main components of our robot. To see the full code of this file,
CTRL+click on the file name.
1. Base:
(XML listing omitted; see the linked robot_core.xacro file.)
Figure 3.17 Robot state publisher and URDF diagram
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/description/robot_core.xacro
The link tag represents a part of the robot; this part is called base_link. The visual tag
describes the part by specifying its origin, geometry, and material: our base is centered at
point (0.15, 0.0, 0.075), has a box shape with dimensions 30 cm, 60 cm, 15 cm, and its color
is blue. The collision tag is the same as the visual tag, and the link does not have a joint
because it is the root link.
2. wheels:
❖ Link:
(Link listing omitted; see robot_core.xacro.)
Our link is called front_left_wheel; it is a cylinder with a 3.5 cm radius
and 4 cm length.
❖ Joint:
(Joint listing omitted; see robot_core.xacro.)
The joint tag describes how the child link is attached to its parent and at what origin it
is connected. The joint type can be continuous or fixed: continuous means the joint
rotates around an axis, and fixed means it is in a static position; the axis tag specifies
the rotation axis. In our joint, the child link is front_left_wheel and the parent is
base_link, so the front_left_wheel is connected to the base with continuous rotation
around the z-axis, placed 25 cm forward and 32 cm to the left, and it is flipped so that
its z-axis points along the x-axis.
Repeat the process for front_right_wheel, back_left_wheel, and back_right_wheel,
adjusting the origins, positions, and rotation axes.
3. Feet:
❖ Link:
(Link listing omitted; see robot_core.xacro.)
The left_feet link is a box with dimensions 15 cm, 15 cm, 38 cm, colored black.
❖ Joint:
(Joint listing omitted; see robot_core.xacro.)
The left_feet link is connected to base_link with a fixed joint at point (15, 15, 34).
Repeat the process for right_feet, adjusting the origin and position.
4. Body:
❖ Link:
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
The body is a box shape with dimensions 15 cm, 45 cm, 40 cm.
❖ Joint:
(Joint listing omitted; see robot_core.xacro.)
The body is connected to the base_link at point (15,0,73).
5. Head:
❖ Link:
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
25 | P a g e
13.
The head is a box shape with dimensions 15 cm, 30 cm, 30 cm.
❖ Joint:
(Joint listing omitted; see robot_core.xacro.)
The head is connected to the base_link at point (15,0,108).
3.2.1.2 lidar.xacro file:
This file contains the description of the lidar: it specifies the geometry, position, and where it
is connected. It will also contain the Gazebo driver for the lidar, covered later in detail. To
see the full code of this file, CTRL+click on the file name.
(XML listing omitted; see the linked lidar.xacro file.)
3.2.1.3 camera.xacro file:
This file contains the description of the camera: it specifies the geometry, position, and where
it is connected. It will also contain the Gazebo driver for the camera, covered later in detail.
To see the full code of this file, CTRL+click on the file name.
(XML listing omitted; see the linked camera.xacro file.)
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/description/lidar.xacro
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/description/camera.xacro
Note that we have a camera joint and an optical joint: the camera joint is for the case of the
camera, and the optical joint is for the lens of the camera. The one we will use in the Gazebo
driver is the optical joint.
3.2.1.4 robot.urdf.xacro file:
This file includes all the above files, and it is the one passed to
xacro and then to robot_state_publisher.
(XML listing omitted; see the linked robot.urdf.xacro file.)
Now we can create our robot state publisher launch file. A launch file is a Python script that
can run multiple ROS nodes automatically, and we will use it to pass our URDF to the robot
state publisher.
3.2.1.5 rsp.launch.py file:
1. def generate_launch_description():
2. # Check if we're told to use sim time
3. use_sim_time = LaunchConfiguration('use_sim_time')
4. # Process the URDF file
5. pkg_path = os.path.join(get_package_share_directory('robomealmate'))
6. xacro_file = os.path.join(pkg_path,'description','robot.urdf.xacro')
7. robot_description_config = xacro.process_file(xacro_file)
8. # Create a robot_state_publisher node
9. params = {'robot_description': robot_description_config.toxml(), 'use_sim_time':
use_sim_time}
10. node_robot_state_publisher = Node(
11. package='robot_state_publisher',
12. executable='robot_state_publisher',
13. output='screen',
14. parameters=[params]
15. )
16. # Launch!
17. return LaunchDescription([
18. DeclareLaunchArgument(
19. 'use_sim_time',
20. default_value='false',
21. description='Use sim time if true'),
22. node_robot_state_publisher
23. ])
In line 3 we create a variable called use_sim_time; it is a Boolean that tells the
robot_state_publisher node whether we are using simulation time, and we can assign this
value as an argument when we launch the file. pkg_path tells the node the path to our
package directory, and xacro_file is the path to our robot URDF file. Then, at line 7, we pass
our URDF to xacro to create a single description file, as mentioned earlier. Next, we create
our robot_state_publisher node by specifying the package name, the executable file, and the
parameters; in our case the package name is robot_state_publisher, the executable has the
same name, and the params variable holds the robot description configuration and
use_sim_time. At the end of the file, we specify the default value for the launch arguments.
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/description/robot.urdf.xacro
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/launch/rsp.launch.py
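For example, assuming the package has been built and sourced, the launch argument declared above can be set explicitly from the command line:

ros2 launch robomealmate rsp.launch.py use_sim_time:=true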
By accomplishing this we can visualize the robot on rviz2.
3.2.1.6 visualize robomealmate on rviz2:
Before we dive into visualization, let us take a look at our ROS package for the robot; the
following is the tree of its directories and files:
Figure 3.18 robomealmate package tree
We organized the package as in Figure 3.18. The config directory will contain configuration
files, the description directory will contain xacro files, the launch directory will contain
launch files, and the worlds directory will contain worlds for gazebo.
Let us create our package with ros2 pkg create robomealmate, build it with colcon build
--symlink-install, and source our installation with source install/setup.bash.
Figure 3.19 robomealmate package creation and build
Let us launch rsp.launch.py with the ros2 launch robomealmate rsp.launch.py command.
Figure 3.20 robomealmate state publisher launch
As we see in Figure 3.20, the robot state publisher identified the links and joints of our robot,
and if we run the ros2 topic list command we should see the published topics.
In Figure 3.21 we see that the /tf and /robot_description topics are published, and the
/joint_states topic is waiting for values from joint_state_publisher_gui.
Figure 3.21 topic list after launch publisher
From Figure 3.22 we can see that our joint_state_publisher_gui detects the continuous joints and can
assign values to them. Now let us open rviz2 and see our robot.
When rviz2 opens, click on the Add button, then select TF from the menu to show the robot
transforms.
Now let us show the robot model: click on Add, select RobotModel, then click OK; on the left side,
under the topic option, set the topic to /robot_description.
Figure 3.22 robomealmate joint gui launch
Figure 3.23 robomealmate TF
Now we are ready to move to the next step: Gazebo.
3.2.2 Gazebo simulation:
Before we can simulate the robot in Gazebo we have to:
1. Modify the URDF to work with Gazebo by adding physics tags (inertia) and
materials, because Gazebo runs a physics engine.
2. Add the lidar driver to the lidar xacro file to work on the gazebo.
3. Add the camera driver to the camera xacro file to work on the gazebo.
4. Create a simulation launch file to launch the robot simulation.
3.2.2.1 URDF modifications:
1. Add an inertia tag to the robot parts. An inertia tag defines the inertial properties of a
robot's links; these properties are critical for accurately simulating the robot's
dynamics, including how it moves and reacts to forces. In the robomealmate robot we
have two shapes, box and cylinder, so we have to find the moment of inertia
equations for the box and the cylinder [1]. We created a file called
inertial_macros.xacro that contains the moment of inertia equations for the box and
cylinder; we can include it in the robot_core.xacro file and use the macros by name.
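For reference, the standard moment of inertia formulas on which such macros are based (for a solid box with mass m and side lengths x, y, z, and a solid cylinder with mass m, radius r, and length l along its z-axis, both about their centers of mass) are:

$$I_{xx}^{box} = \tfrac{1}{12} m (y^2 + z^2), \quad I_{yy}^{box} = \tfrac{1}{12} m (x^2 + z^2), \quad I_{zz}^{box} = \tfrac{1}{12} m (x^2 + y^2)$$

$$I_{xx}^{cyl} = I_{yy}^{cyl} = \tfrac{1}{12} m (3r^2 + l^2), \qquad I_{zz}^{cyl} = \tfrac{1}{2} m r^2$$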
✓ Box moment of inertia:
(Macro listing omitted; see the linked inertial_macros.xacro file.)
Figure 3.24 robomealmate Model
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/description/inertial_macros.xacro
✓ Cylinder moment of inertia:
(Macro listing omitted; see inertial_macros.xacro.)
Now we can include the file in the robot_core.xacro file (with a xacro:include tag).
To use the inertial macros, we add the following tag inside each link, under the collision
tag:
✓ for box geometry:
(Macro call omitted; see robot_core.xacro.)
The origin should be the same as the link origin; if a link has no origin, set it to zeros.
✓ For cylinder geometry:
(Macro call omitted; see robot_core.xacro.)
Repeat the process for all links in the robot URDF.
2. Add Gazebo materials to the URDF to see the colors in Gazebo. We add them with a
gazebo tag whose reference attribute names the link to which the material applies,
for example for the (blue) base link:
1. <gazebo reference="base_link">
2.   <material>Gazebo/Blue</material>
3. </gazebo>
Repeat the process for all links in the URDF.
3.2.2.2 Add lidar gazebo driver
Gazebo comes with a bunch of plugins for sensors; these plugins give us the ability to
test sensors in virtual environments. The plugin we will use for the lidar is called
"libgazebo_ros_ray_sensor.so". This plugin needs some configuration; below is how to
configure the plugin driver in the lidar.xacro file:
1. <gazebo reference="laser_frame">
2.   <material>Gazebo/Red</material>
3.   <sensor name="laser" type="ray">
4.     <pose>0 0 0 0 0 0</pose>
5.     <visualize>true</visualize>
6.     <update_rate>10</update_rate>
7.     <ray>
8.       <scan>
9.         <horizontal>
10.          <samples>360</samples>
11.          <resolution>1</resolution>
12.          <min_angle>-3.14</min_angle>
13.          <max_angle>3.14</max_angle>
14.        </horizontal>
15.      </scan>
16.      <range>
17.        <min>0.3</min>
18.        <max>12.0</max>
19.        <resolution>0.01</resolution>
20.      </range>
21.    </ray>
22.    <plugin name="laser_controller" filename="libgazebo_ros_ray_sensor.so">
23.      <ros>
24.        <remapping>~/out:=scan</remapping>
25.      </ros>
26.      <output_type>sensor_msgs/LaserScan</output_type>
27.      <frame_name>laser_frame</frame_name>
28.    </plugin>
29.  </sensor>
30. </gazebo>
The reference link for the plugin is laser_frame, the color of the lidar is red, and its pose is at
the original origin. We want to see the laser rays, so the visualize tag is true, and the
update_rate for readings is 10 Hz. The scan is horizontal, with 360 samples at resolution 1, a
reading angle between -3.14 and 3.14 rad, and a covered range from 0.3 m up to 12 m. The
plugin we use for this sensor is "libgazebo_ros_ray_sensor.so"; the ROS topic to which the
scan data is published is /scan, the message type is sensor_msgs/LaserScan, and the frame is
laser_frame. With these settings the simulated lidar is set up and behaves similarly to our
real lidar.
3.2.2.3 Add camera gazebo driver
The plugin we will use for the camera is “libgazebo_ros_camera.so” and below is how to
configure the plugin driver in the camera.xacro file:
1. <gazebo reference="camera_link">
2.   <material>Gazebo/Red</material>
3.   <sensor name="camera" type="camera">
4.     <pose>0 0 0 0 0 0</pose>
5.     <visualize>true</visualize>
6.     <update_rate>10</update_rate>
7.     <camera>
8.       <horizontal_fov>1.089</horizontal_fov>
9.       <image>
10.        <width>640</width>
11.        <height>480</height>
12.        <format>R8G8B8</format>
13.      </image>
14.      <clip>
15.        <near>0.05</near>
16.        <far>8.0</far>
17.      </clip>
18.    </camera>
19.    <plugin name="camera_controller" filename="libgazebo_ros_camera.so">
20.      <frame_name>camera_link_optical</frame_name>
21.    </plugin>
22.  </sensor>
23. </gazebo>
From line 9 to line 13, we set up the image properties. Inside the clip tag, we specify the lens
clipping limits, and the frame that will be used is camera_link_optical.
3.2.2.4 simulation.launch.py file
As we did before with rsp.launch.py, we create a launch file; this one is responsible for
launching one node and two launch files. The launch files are rsp.launch.py, created before,
and gazebo.launch.py, the file that launches Gazebo; the node is the spawner node,
responsible for spawning robomealmate in the Gazebo environment.
1. rsp launch file:
1. rsp = IncludeLaunchDescription(
2. PythonLaunchDescriptionSource([os.path.join(
3. get_package_share_directory(package_name),'launch','rsp.launch.py'
4. )]), launch_arguments={'use_sim_time': 'true'}.items()
5. )
Note that the argument ‘use_sim_time’ is true because we are using a simulated
environment.
2. Gazebo launch:
1. gazebo = IncludeLaunchDescription(
2. PythonLaunchDescriptionSource([os.path.join(
3. get_package_share_directory('gazebo_ros'), 'launch',
'gazebo.launch.py')]),
4. )
Gazebo launch file, provided by the gazebo_ros package.
3. Run the spawner node from the gazebo_ros package:
1. spawn_entity = Node(package='gazebo_ros', executable='spawn_entity.py',
2.     arguments=['-topic', 'robot_description',
3.                '-entity', 'robomealmate'],
4.     output='screen')
4. Launching:
1. return LaunchDescription([
2.     rsp,
3.     gazebo,
4.     spawn_entity,
5. ])
3.2.2.5 Launch simulation:
Open a terminal, navigate to our package directory, run colcon build to apply the new files,
then run the ros2 launch robomealmate simulation.launch.py command.
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/launch/simulation.launch.py
As we see in Figure 3.25 and Figure 3.26, our simulation launch file ran successfully:
robot_state_publisher was working, the Gazebo launch file was working, and the spawner node
spawned our robot in the Gazebo environment. It is clear that the lidar and camera are working, but
let us double-check by looking at the topic list.
Figure 3.26 robomealmate simulation launch
Figure 3.25 robomealmate simulation launch
As shown in Figure 3.27, the scan topic and camera topics are publishing data and are ready
for processing.
Now we are ready to move to the next step: driving our simulated robomealmate!
3.2.3 Gazebo controller (drive simulated robomealmate):
This section is divided into:
1. Understanding control.
2. Gazebo controller setting up.
3. Driving simulated robomealmate.
3.2.3.1 Understanding control:
Each robot needs a control system. The control system is responsible for taking a command
velocity as input, translating it into motor commands for the motor drivers, reading the
actual motor speed back from the motor drivers, and then calculating the true velocity. In
ROS, the command velocity is on a topic called /cmd_vel and the message type is Twist; a
Twist message contains six numbers: linear velocity on the x, y, and z axes, and angular
velocity on the x, y, and z axes. In our robomealmate we use a differential drive, which
means we only need the linear velocity on x for forward and backward movement and the
angular velocity on z for rotating left and right. Beyond the true velocity, we are interested in
the robot's position; the control system can estimate it by integrating the velocity over time,
adding it up in tiny time steps. This process is called dead reckoning, and the resulting
position estimate is called our odometry. The following diagram shows the process.
Figure 3.28 understanding control diagram
Figure 3.27 topic list after simulation
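To make the dead-reckoning idea concrete, here is a small illustrative Python sketch (not the project's controller code) that integrates a commanded forward and angular velocity over small time steps, exactly as the diagram describes.

import math

def update_odometry(x, y, theta, v_x, w_z, dt):
    # Integrate linear velocity v_x (m/s) and angular velocity w_z (rad/s)
    # over a small time step dt (s) to update the pose estimate.
    x += v_x * math.cos(theta) * dt
    y += v_x * math.sin(theta) * dt
    theta += w_z * dt
    return x, y, theta

# Example: driving straight ahead at 0.5 m/s for one 0.1 s step from the origin.
print(update_odometry(0.0, 0.0, 0.0, 0.5, 0.0, 0.1))  # (0.05, 0.0, 0.0)

Because each step only adds a small increment, any error in the measured velocity also accumulates, which is why odometry drifts over time.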
As we have seen before, whenever we need ROS to interact with Gazebo we need a plugin. The gazebo_ros package provides a control plugin for a differential drive.
Figure 3.29 control gazebo
This plugin interacts with the core Gazebo code that simulates the motors and the rest of the robot, and we will replace the previous source of /joint_state with it, as shown in Figure 3.30.
Figure 3.30 controlling simulated robomealmate diagram
Instead of faking /joint_state with joint_state_publisher_gui, the Gazebo robot is spawned from the robot description and the joint states are published by the control plugin. The plugin also broadcasts a transform from a new frame called odom (which acts like the world origin, the robot's start position) to base_link, which lets any other code know the current position estimate of our robomealmate.
3.2.3.2 Gazebo controller setting up:
To be able to use the plugin we add a new file called gazebo_controller.xacro to the description directory and include it in the robot.urdf.xacro file; it contains the configuration and parameters for the plugin.
1. <gazebo>
2.     <plugin name="diff_drive" filename="libgazebo_ros_diff_drive.so">
3. 
4.         <!-- Wheels -->
5.         <num_wheel_pairs>2</num_wheel_pairs>
6.         <left_joint>front_left_wheel_joint</left_joint>
7.         <left_joint>back_left_wheel_joint</left_joint>
8.         <right_joint>front_right_wheel_joint</right_joint>
9.         <right_joint>back_right_wheel_joint</right_joint>
10.         <wheel_separation>0.6</wheel_separation>
11.         <wheel_diameter>0.07</wheel_diameter>
12. 
13.         <!-- Limits -->
14.         <max_wheel_torque>200</max_wheel_torque>
15.         <max_wheel_acceleration>10.0</max_wheel_acceleration>
16. 
17.         <!-- Output -->
18.         <odometry_frame>odom</odometry_frame>
19.         <robot_base_frame>base_link</robot_base_frame>
20.         <publish_odom>true</publish_odom>
21.         <publish_odom_tf>true</publish_odom_tf>
22.         <publish_wheel_tf>true</publish_wheel_tf>
23.     </plugin>
24. </gazebo>
We have two wheel pairs, one on the left with two wheels and one on the right with two wheels, and we specify our wheel joints on the left and right sides. The wheel separation is the distance between the left and right wheels, in our case 60 cm, and the wheel diameter is 7 cm. The maximum wheel torque is 200 and the maximum wheel acceleration is 10. The odometry frame is odom (the odometry itself is published on the odom topic) and the base frame of our robot is base_link, and we ask the plugin to publish the odometry, the odom transform, and the wheel transforms. With this the diff drive plugin is set up and we are ready to control our virtual robomealmate.
3.2.3.3 Driving simulated robomealmate
Before we run simulation.launch.py we need to make sure the gazebo_controller.xacro file is included in the robot.urdf.xacro file.
Figure 3.31 simulation with gazebo controller
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/description/gazebo_controller.xacro
As we see in Figure 3.31, our configuration for the diff drive plugin is picked up: we now have an odom topic and the controller subscribes to the cmd_vel topic.
To drive our simulated robomealmate we will use a ROS tool called teleop_twist_keyboard; this tool publishes Twist messages to the cmd_vel topic from keyboard input.
As we see in Figures 3.31 & 3.32, teleop_twist_keyboard is publishing command velocities to the cmd_vel topic, the Gazebo controller subscribes to cmd_vel, and our simulated robomealmate starts moving. A minimal Python alternative that publishes the same kind of message is sketched below.
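For completeness, the small sketch below publishes the same kind of Twist message from Python instead of the keyboard; the node name cmd_vel_demo and the velocity values are just example choices.

import rclpy
from geometry_msgs.msg import Twist

def main():
    rclpy.init()
    node = rclpy.create_node('cmd_vel_demo')
    pub = node.create_publisher(Twist, '/cmd_vel', 10)

    msg = Twist()
    msg.linear.x = 0.2   # forward velocity in m/s (x axis)
    msg.angular.z = 0.5  # rotation about the z axis in rad/s

    # Publish at 10 Hz; the diff drive plugin in Gazebo subscribes to /cmd_vel.
    node.create_timer(0.1, lambda: pub.publish(msg))
    rclpy.spin(node)

if __name__ == '__main__':
    main()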
Figure 3.32 teleop twist keyboard with cmd_vel topic
Figure 3.33 simulated robomealmate moving
What have we done so far?
1. Created the robomealmate URDF.
2. Showed robomealmate in rviz2.
3. Created camera and lidar drivers for Gazebo.
4. Simulated robomealmate in Gazebo.
5. Created launch files for the simulation.
6. Integrated Gazebo control with ROS to drive robomealmate in the virtual environment.
Now we are ready for the next step: SLAM.
3.2.4 SLAM (Simultaneous Localization and Mapping):
At the end of this section, the simulated robomealmate will be able to generate a map of the environment around it and localize itself within that map.
What are we going to do?
1. Understand SLAM.
2. Integrate the SLAM toolbox that comes with ROS with the simulated robomealmate.
3. Generate a map of the virtual environment.
3.2.4.1 What is SLAM?
SLAM is an acronym that stands for Simultaneous Localization and Mapping, so what are localization and mapping?
Let's start with mapping. Imagine you want to make a map of your street, and you have a phone on you so that you can track your GPS location. As you walk along the street you take note of what you see, for example an orange house on the left, a green house on the right, then a corner; turn the corner and there is a red roof on the right, a white fence on the left, and so on. Once you have finished, you have a nice little map of your street.
Figure 3.34 mapping
Now that you have your map, you can use it for localization. Let's say your phone battery runs out, so you have no GPS and you need to figure out where you are.
You can see a white fence on your right and a red roof further up on the left.
Figure 3.36 localization 2
Figure 3.35 localization 1
Using the map you will be able to pinpoint your location pretty accurately wherever you are: you have found your location in the global coordinate system, you have localized.
The problem with this approach is that we needed the GPS on our phone in the first place to make an accurate map. Sometimes we might only have a GPS coordinate for our starting position, or no GPS at all; in that case we need SLAM, we need to simultaneously localize and map.
From our starting position, we might be able to see the orange house and the green house.
As we walk we keep an eye on where they are and, consequently, where we are compared to them; we could use our stride length to help with this position estimate.
Figure 3.37 localization 3
Figure 3.38 SLAM 1
Then whenever we see a new object we know where it is, because we know where we are, and this goes on continuously.
Congratulations, we have just SLAMmed. The result may not be as accurate as with GPS, but it is much better than having no map at all.
SLAM comes in many different forms with many different algorithms, but most SLAM methods can be categorized into one of two families: feature (or landmark) SLAM and grid SLAM. What we just did was feature SLAM, where our features or landmarks were things like the red roof or the white fence. Grid SLAM, on the other hand, divides the world into a series of cells, and each cell can be occupied, unoccupied, or somewhere in between. The SLAM toolbox we are going to use on our robomealmate is a grid-map-based approach, so we are going to use grid SLAM with a 2D lidar. A tiny illustration of an occupancy grid is shown after the figures below.
Figure 3.40 SLAM 3
Figure 3.39 SLAM 2
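As a tiny illustration of the grid idea (example values only), an occupancy grid simply stores one value per cell; the nav_msgs/OccupancyGrid message used by the map topic encodes free as 0, occupied as 100, and unknown as -1.

UNKNOWN, FREE, OCCUPIED = -1, 0, 100

# A 5 x 5 patch of map: a wall along the top row, open space below,
# and a rightmost column that has not been observed yet.
grid = [
    [OCCUPIED, OCCUPIED, OCCUPIED, OCCUPIED, UNKNOWN],
    [FREE,     FREE,     FREE,     FREE,     UNKNOWN],
    [FREE,     FREE,     FREE,     FREE,     UNKNOWN],
    [FREE,     FREE,     FREE,     FREE,     UNKNOWN],
    [FREE,     FREE,     FREE,     FREE,     UNKNOWN],
]

resolution = 0.05  # meters per cell, the same resolution we use later for the costmaps
print(f"this patch covers {len(grid[0]) * resolution:.2f} m across")  # 0.25 m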
3.2.4.2 Integrate SLAM toolbox
Before we dive into the process, we want to introduce a new frame, the map frame. We can express the location of base_link relative to the map frame, so base_link can be described either relative to the odom frame or relative to the map frame.
This gives us a new problem though, because a frame in ROS can only have one parent.
The way around this is for the SLAM code to take its position estimate together with the current odom to base_link transform and calculate the appropriate map to odom transform. That means we can still get the base_link position relative to either reference point: relative to odom it moves smoothly but drifts over time, while relative to the map it may jump around but generally stays correct over time. A small sketch of this frame arithmetic follows the figures below.
Figure 3.41 map frame 1
Figure 3.42 map frame 2
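The sketch below shows this frame arithmetic in 2D with made-up numbers (it is an illustration, not the SLAM toolbox source): map to odom is obtained by composing SLAM's map-to-base_link estimate with the inverse of the odometry's odom-to-base_link transform.

import math

def compose(a, b):
    # Compose two 2D poses (x, y, theta): apply pose b in the frame of pose a.
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

def invert(t):
    # Inverse of a 2D pose, so that compose(t, invert(t)) is the identity.
    x, y, th = t
    return (-x * math.cos(th) - y * math.sin(th),
             x * math.sin(th) - y * math.cos(th),
            -th)

map_to_base = (2.0, 1.0, 0.50)    # SLAM's estimate of the robot in the map frame
odom_to_base = (1.9, 1.1, 0.48)   # wheel-odometry estimate, slightly drifted

map_to_odom = compose(map_to_base, invert(odom_to_base))
print(map_to_odom)  # broadcasting this keeps map -> odom -> base_link consistent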
As well as the odom and map frames, you will also find odom and map topics, which contain data about the odometry and the map. The odom topic contains basically the same position information as the odom to base_link transform, but it also contains the current velocity and the associated covariances. The map topic contains the actual occupancy data for the grid map so that it can be shared with other nodes. A small listener that prints the contents of /odom is sketched below.
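As a quick way to see what the odom topic carries, the illustrative listener below (the node name odom_listener is just an example) prints the pose and velocity fields of each nav_msgs/Odometry message.

import rclpy
from nav_msgs.msg import Odometry

def on_odom(msg: Odometry):
    p = msg.pose.pose.position      # position estimate, same information as odom -> base_link
    v = msg.twist.twist             # current linear and angular velocity
    print(f"pose: x={p.x:.2f} y={p.y:.2f}  vel: linear.x={v.linear.x:.2f} angular.z={v.angular.z:.2f}")

def main():
    rclpy.init()
    node = rclpy.create_node('odom_listener')
    node.create_subscription(Odometry, '/odom', on_odom, 10)
    rclpy.spin(node)

if __name__ == '__main__':
    main()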
One other thing to be aware of is that, in addition to the base_link frame, some SLAM systems also like you to have a footprint frame. That is because some robots can move up and down in 3D, so base_link can move along z, but we want to treat SLAM as a 2D problem. The base_footprint frame is like a shadow of the base_link frame on the ground: it is stuck to the x-y plane at z equal to zero, it moves around underneath base_link, and it is the frame used for SLAM.
First, let us add the footprint link to our URDF (robot_core.xacro):
1. <link name="base_footprint">
2. </link>
3. <joint name="base_footprint_joint" type="fixed">
4.     <parent link="base_link"/>
5.     <child link="base_footprint"/>
6.     <origin xyz="0 0 0" rpy="0 0 0"/>
7. </joint>
SLAM toolbox has a few different modes it can run in and a lot of different options. The mode we are going to use is called online asynchronous: online means we are running it live rather than on previously logged data, and asynchronous means we do not insist on processing every scan. If our processing rate is a bit slower than the scan rate, we always process the latest scan, even if that means we occasionally miss one.
SLAM toolbox comes with launch files and params files that help us set the options, so we are going to take one of those params files, mapper_params_online_async.yaml, and copy it into our local config directory:
robomealmate@ubuntu:~$ cp /opt/ros/foxy/share/slam_toolbox/config/mapper_params_online_async.yaml \
> robomealmate/src/robomealmate/config/
Figure 3.43 map and odom topics
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/config/mapper_params_online_async.yaml
We are only going to change a couple of the parameters, and the most important ones are:
1. # ROS Parameters
2. odom_frame: odom
3. map_frame: map
4. base_frame: base_footprint
5. scan_topic: /scan
6. mode: mapping #localization
With this we are telling the SLAM toolbox that the odom frame is odom, the map should be published on the map topic, the base frame of robomealmate is base_footprint, the topic it will take the laser readings from is /scan, and the mode is mapping. There is another mode called localization, used with a previously prepared map, which we will see later in detail; for now we are good to go.
3.2.4.3 Launching SLAM
First, colcon build the workspace and launch the simulation as we did before.
Figure 3.44 gazebo obstacles
To test, we added some obstacles to the virtual environment.
Now let's run rviz2. Once it has launched, we add a LaserScan display from the Add menu, as we did with TF and the robot model, and set its topic to /scan. We also add an Image display so we can view the virtual environment in rviz2 through the camera.
Figure 3.45 rviz2 view
As we see in Figure 3.45, the laser scan is publishing data and we can see the red lines clearly, and the camera is working so we can see the virtual environment. Note that the fixed frame is odom; this will change to map when we run SLAM.
Now let us run the SLAM toolbox by executing the following command:
1. robomealmate@ubuntu:~/robomealmate$ ros2 launch slam_toolbox online_async_launch.py
params_file:=./src/robomealmate/config/mapper_params_online_async.yaml
use_sim_time:=true
In this command we are running slam_toolbox in online asynchronous mode with the parameter file we created before, and setting sim time to true.
Figure 3.46 launching SLAM toolbox
As we can see from Figure 3.46, the SLAM toolbox has started publishing the map on the map topic. Now we can see our map in rviz2 by adding a Map display from the Add menu and setting its topic to map.
Figure 3.47 rviz2 map view
As is clear from Figure 3.47, our map starts appearing; let's drive around and build up the map.
Figure 3.48 rviz2 map save
As shown in Figure 3.48, our map was created and updated while driving. Compared to the virtual environment in Figure 3.44, it is essentially identical.
We can save the generated map by adding the SLAM toolbox panel from the Panels menu in rviz2, then naming our map and saving it. We can use the saved map with localization mode, but first we have to edit some parameters in the mapper_params_online_async.yaml file.
https://github.com/Eng-OsamaMansour/robomealmate/blob/main/src/robomealmate/config/mapper_params_online_async.yaml
1. odom_frame: odom
2. map_frame: map
3. base_frame: base_footprint
4. scan_topic: /scan
5. mode: localization #mapping
6. map_file_name: /home/robomealmate/robomealmate/simulated
7. # map_start_pose: [0.0, 0.0, 0.0]
8. map_start_at_dock: true
First, change the mode to localization, then set the map file path and set map_start_at_dock to true; we can also specify a starting pose if needed. Let's run the SLAM toolbox again and see the result.
Figure 3.49 localization mode
As we see from Figure 3.49, our map is loaded and the simulated robomealmate is localized within it.
We are ready to move to the next step: making our simulated robomealmate navigate autonomously in the virtual environment.
3.2.5 Navigation
Navigation is where we want to plan and execute a safe trajectory from the initial pose to the
target pose.
In this section we will go through:
1. ROS nav2 stack concept.
2. Understanding navigation on robomealmate.
3. Set up the configuration file.
4. Navigation method 1 (live navigation).
3.2.5.1 ROS nav2 stack concept
To understand the nav2 stack in ROS we must be familiar with some terms and concepts.
• Behavior trees
A behavior tree is a hierarchical structure made of nodes that represent tasks or behaviors, organized in a tree-like structure. It is used for high-level task control and decision-making, and it manages the flow of navigation tasks, for example compute-path or follow-path tasks.
• Planner server
The task of the planner is to compute a path that satisfies some objective function; it uses A* and/or Dijkstra's algorithm to compute the path.
• Controller server
It generates and executes velocity commands, based on the planner's path, to drive the robot.
• Cost maps
There are two types of costmaps: the global costmap covers the entire map and is used for global path planning, while the local costmap covers the area around the robot and is used for obstacle avoidance. A costmap is built from layers: the static layer represents unmovable objects such as walls, the obstacle (or voxel) layer marks obstacles detected by the sensors, and the inflation layer adds a buffer zone around obstacles to ensure safe navigation.
Let us walk through a scenario. Suppose we need robomealmate to go from (x0, y0) to (x, y). The user sends a navigation goal, the goal is processed by the behavior tree, and the global planner computes an optimal path from the current position to the goal position while avoiding known obstacles. The local planner then generates velocity commands through the controller server to follow the global path while avoiding dynamic obstacles in real time. If the robot gets stuck or detects an issue, recovery actions such as replanning or rotating are triggered, and when robomealmate reaches the goal a success result is sent back. The sketch below shows one way such a goal could be sent from code.
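As an illustration of how such a goal could be sent from code, the hedged sketch below assumes nav2's default NavigateToPose action name navigate_to_pose and uses example coordinates; it is not part of our launch files.

import rclpy
from rclpy.action import ActionClient
from nav2_msgs.action import NavigateToPose

def main():
    rclpy.init()
    node = rclpy.create_node('goal_sender')
    client = ActionClient(node, NavigateToPose, 'navigate_to_pose')
    client.wait_for_server()

    goal = NavigateToPose.Goal()
    goal.pose.header.frame_id = 'map'    # the goal is expressed in the map frame
    goal.pose.pose.position.x = 2.0      # example target x in meters
    goal.pose.pose.position.y = 1.5      # example target y in meters
    goal.pose.pose.orientation.w = 1.0   # face along +x at the goal

    future = client.send_goal_async(goal)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f'Goal accepted: {future.result().accepted}')
    rclpy.shutdown()

if __name__ == '__main__':
    main()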
3.2.5.2 Understanding navigation on robomealmate
The first thing we need in order to navigate is an accurate position estimate, because if we do not know where we are at any point in time, we cannot figure out where we are trying to go; SLAM provides this position estimate for us. The second thing we need is awareness of the obstacles around the robot, and this information usually comes from two sources, sometimes combined. The first is a map: our SLAM system has already produced a map for us, so we can use that map as the basis for navigation, and the same obstacles we used for localization now become something to avoid. This is most straightforward with a grid-SLAM type of map, where we take the occupied cells, add a buffer around them, and those become our obstacles. The second source is a map generated on the fly from sensor data, in our case the lidar. This prevents the robot from bumping into things, which is good, but it means we cannot plan the whole trajectory from the start; the robot has to figure things out as it goes. Ideally we combine both: an initial map helps us plan the trajectory, and live lidar data updates it with any new obstacles or changes along the way. All of this information is stored in the costmaps.
Figure 3.50 robomealmate navigation diagram
3.2.5.3 Set up the configuration file
This file is a bit complicated, so we will divide it into multiple configurations within the same file. The file is named nav2_params.yaml and it contains the configuration for the nav2 stack, divided into the following:
1. Local costmap configuration.
2. Global costmap configuration.
3. Planner server configuration.
4. Controller server configuration.
3.2.5.3.1 Local costmap configuration
The local cost map is a grid map representing the robot's immediate surroundings,
dynamically updated using sensor data.
1. local_costmap:
2. local_costmap:
3. ros__parameters:
4. update_frequency: 5.0
5. publish_frequency: 2.0
6. global_frame: odom
7. robot_base_frame: base_link
8. use_sim_time: True
9. rolling_window: true
10. width: 3
11. height: 3
12. resolution: 0.05
13. robot_radius: 0.3
14. plugins: ["voxel_layer", "inflation_layer"]
15. inflation_layer:
16. plugin: "nav2_costmap_2d::InflationLayer"
17. cost_scaling_factor: 3.0
18. inflation_radius: 0.55
19. voxel_layer:
20. plugin: "nav2_costmap_2d::VoxelLayer"
21. enabled: True
22. publish_voxel_map: True
23. origin_z: 0.0
24. z_resolution: 0.05
25. z_voxels: 16
26. max_obstacle_height: 2.0
27. mark_threshold: 0
28. observation_sources: scan
29. scan:
30. topic: /scan
31. max_obstacle_height: 2.0
32. clearing: True
33. marking: True
34. data_type: "LaserScan"
35. static_layer:
36. map_subscribe_transient_local: True
37. always_send_full_costmap: True
38. local_costmap_client:
39. ros__parameters:
40. use_sim_time: True
41. local_costmap_rclcpp_node:
42. ros__parameters:
43. use_sim_time: True
▪ update_frequency: the rate (in Hz) at which the local costmap is updated with sensor data; in our configuration, 5 times per second.
▪ publish_frequency: the rate (in Hz) at which the costmap is published to topics for visualization; in our case, 2 times per second.
▪ global_frame: the reference frame for the costmap. odom is used as a dynamic, robot-relative frame to account for movement.
▪ robot_base_frame: the robot's physical reference frame, in our case base_link.
▪ rolling_window: enables a moving window centered on the robot, rather than a static map. This is ideal for local navigation.
▪ width & height: dimensions (in meters) of the local costmap; we set it to 3 m x 3 m.
▪ resolution: grid resolution in meters per cell; a resolution of 0.05 means each cell represents 5 cm.
▪ robot_radius: the robot's radius in meters, which helps define a safe clearance around the robot; our robot radius is 30 cm.
▪ plugins: specifies the costmap layers to load; a voxel layer and an inflation layer are used.
▪ cost_scaling_factor: determines how rapidly the cost decreases as the distance from an obstacle increases. Higher values create sharper gradients (see the small numeric illustration after this list).
▪ inflation_radius: the maximum distance (in meters) from an obstacle at which inflation is applied.
▪ plugin: defines the layer type as a voxel layer for 3D obstacle representation.
▪ enabled: enables the voxel layer.
▪ publish_voxel_map: publishes the voxel grid for debugging and visualization.
▪ origin_z: the Z-coordinate of the costmap’s base.
▪ z_resolution: vertical resolution 5 cm per voxel.
▪ z_voxels: The number of voxels along the Z-axis, allowing for a 3D map height of
z_resolution * z_voxels = 0.05 * 16 = 0.8m.
▪ max_obstacle_height: obstacles above 2m are ignored.
▪ mark_threshold: minimum number of hits required for a voxel to be marked as an
obstacle.
▪ observation_sources: specifies the sensors used for obstacle detection; in our case a LiDAR sensor (scan).
▪ topic: the ROS topic to subscribe to for laser scan data.
▪ max_obstacle_height: obstacles above this height (2m) are ignored.
▪ clearing: enables the removal of obstacles from the costmap when they are no longer
detected.
▪ marking: enables marking detected obstacles on the costmap.
▪ data_type: specifies the type of sensor data.
▪ static_layer: used for incorporating a static map into the costmap.
▪ map_subscribe_transient_local: ensures the subscription to static map data persists
across node lifecycle changes.
For more information about costmap configuration, see the resources [2].
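The small numeric illustration below shows how cost_scaling_factor and inflation_radius interact; the exponential decay formula is the one documented for the standard costmap_2d inflation layer, so treat the constant 252 and the printed numbers as an approximation rather than a measurement from our robot.

import math

cost_scaling_factor = 3.0   # from our local costmap configuration
inscribed_radius = 0.3      # our robot_radius
inflation_radius = 0.55     # inflation is applied out to this distance

for d in (0.30, 0.40, 0.55):
    # Documented decay: cost = 252 * exp(-cost_scaling_factor * (distance - inscribed_radius))
    cost = 252 * math.exp(-cost_scaling_factor * (d - inscribed_radius))
    print(f"distance {d:.2f} m from the obstacle -> cost about {cost:.0f}")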
3.2.5.3.2 Global Costmap Configuration
1. global_costmap:
2. global_costmap:
3. ros__parameters:
4. update_frequency: 1.0
5. publish_frequency: 1.0
6. global_frame: map
7. robot_base_frame: base_link
8. use_sim_time: True
9. robot_radius: 0.3
10. resolution: 0.05
11. track_unknown_space: true
12. plugins: ["static_layer", "obstacle_layer", "inflation_layer"]
13. obstacle_layer:
14. plugin: "nav2_costmap_2d::ObstacleLayer"
15. enabled: True
16. observation_sources: scan
17. scan:
18. topic: /scan
19. max_obstacle_height: 2.0
20. clearing: True
21. marking: True
22. data_type: "LaserScan"
23. static_layer:
24. plugin: "nav2_costmap_2d::StaticLayer"
25. map_subscribe_transient_local: True
26. inflation_layer:
27. plugin: "nav2_costmap_2d::InflationLayer"
28. cost_scaling_factor: 3.0
29. inflation_radius: 0.55
30. always_send_full_costmap: True
▪ track_unknown_space: it allows the costmap to represent and track unknown space.
▪ static_layer: it loads the entire static map of the environment.
▪ obstacle_layer plugin is also included here but is not present in the local costmap,
which uses the voxel_layer for 3D obstacle representation.
▪ plugin: "nav2_costmap_2d::StaticLayer": loads the static map for global navigation.
▪ map_subscribe_transient_local: ensures reliable subscription to the static map topic,
even for transient or late-joining nodes.
Below are the differences between local and global costmap:
Feature | Local Costmap | Global Costmap
Purpose | Local path planning | Global path planning
Frame | odom | map
Update Frequency | Higher | Lower
Coverage | Immediate surroundings | Entire map
Window Size | Small | Full map size
Plugins | voxel_layer, inflation_layer | static_layer, obstacle_layer, inflation_layer
Dynamic/Static | Dynamic (rolling window) | Static (entire map)
Sensors | Real-time | Static map + sensors
Table 3.2 differences between local and global costmap
For more information about costmap configuration, see the resources [2].
3.2.5.3.3 Planner Server Configuration
1. planner_server:
2. ros__parameters:
3. expected_planner_frequency: 20.0
4. use_sim_time: True
5. planner_plugins: ["GridBased"]
6. GridBased:
7. plugin: "nav2_navfn_planner/NavfnPlanner"
8. tolerance: 0.5
9. use_astar: false
10. allow_unknown: true
▪ expected_planner_frequency: the frequency (in Hz) at which the planner is expected to be called; 20.0 Hz means the planner is expected to generate plans up to 20 times per second.
▪ "nav2_navfn_planner/NavfnPlanner": NavfnPlanner is a grid-based planner that uses
Dijkstra's algorithm by default (or A* if enabled), it generates a path from the robot’s
position to the goal using the global costmap.
▪ tolerance: specifies the tolerance (in meters) for achieving the goal position; with a value of 0.5, the planner accepts reaching within 0.5 m (50 cm) of the goal, and this can be adjusted as needed.
For more information about planner server configuration, see the resources [3].
3.2.5.3.4 Controller server configuration
1. controller_server:
2. ros__parameters:
3. use_sim_time: True
4. controller_frequency: 20.0
5. controller_plugins: ["FollowPath"]
6.
7. FollowPath:
8. plugin: "dwb_core::DWBLocalPlanner"
9. debug_trajectory_details: False
10.
11. # Velocities
12. min_vel_x: 0.0
12. max_vel_x: 2.5 # reduce (e.g. to 0.7 or 1.0) if the hardware cannot sustain this
14. min_vel_y: 0.0
15. max_vel_y: 0.0 # 0 for diff-drive
16. max_vel_theta: 7.0
17. min_speed_xy: 0.0
18. max_speed_xy: 2.5 # match max_vel_x for diff-drive
19. min_speed_theta: 0.0
20.
21. # Accelerations
22. acc_lim_x: 3.5
23. decel_lim_x: -3.5
24. acc_lim_theta: 6.0
25. decel_lim_theta: -6.0
26.
27. # Trajectory sampling
28. vx_samples: 20
29. vtheta_samples: 20
30. sim_time: 1.5
31. linear_granularity: 0.05
32. angular_granularity: 0.025
33. transform_tolerance: 0.2
34.
35. # Tolerances
36. xy_goal_tolerance: 0.25
37. trans_stopped_velocity: 0.2
38.
39. # Critic configuration
40. critics: ["RotateToGoal", "Oscillation", "BaseObstacle",
41. "GoalAlign", "PathAlign", "PathDist", "GoalDist"]
42. BaseObstacle.scale: 0.05
43. PathAlign.scale: 32.0
44. PathAlign.forward_point_distance: 0.1
45. GoalAlign.scale: 24.0
46. GoalAlign.forward_point_distance: 0.1
47. PathDist.scale: 32.0
48. GoalDist.scale: 24.0
49. RotateToGoal.scale: 32.0
50. RotateToGoal.slowing_factor: 1.5
51. RotateToGoal.lookahead_time: -1.0
52. short_circuit_trajectory_evaluation: True
53. stateful: True
It generates velocity commands to drive the robot along the planned global path.
▪ FollowPath: it represents a local planner for trajectory following.
▪ DWBLocalPlanner: is the Dynamic-Window Approach-based local planner.
▪ min_vel_x: the minimum forward velocity for the robot in meters per second.
▪ max_vel_x: the maximum forward velocity in meters per second.
▪ max_vel_theta: the maximum angular velocity in radians per second for rotation.
▪ acc_lim_x: maximum forward acceleration in meters per second squared.
▪ decel_lim_x: maximum deceleration in meters per second squared.
▪ acc_lim_theta: maximum angular acceleration in radians per second squared.
▪ decel_lim_theta: maximum angular deceleration in radians per second squared.
▪ xy_goal_tolerance: the distance tolerance (in meters) to the goal.
▪ critics: A list of cost function critics that evaluate trajectories based on different
criteria, critics guide the robot to generate safe, efficient, and goal-aligned trajectories.
✓ RotateToGoal: ensures the robot faces the goal before driving.
✓ Oscillation: penalizes oscillatory behaviors.
✓ BaseObstacle: avoids obstacles in the local costmap.
✓ GoalAlign: ensures alignment with the goal.
✓ PathAlign: ensures alignment with the path.
✓ PathDist: minimizes the distance from the path.
✓ GoalDist: minimizes the distance to the goal.
For more information about the controller server, see the resources [4]; for the DWB controller, see the resources [5].
Now we are ready to test the navigation.
3.2.5.4 Navigation (live navigation)
Live navigation means using the sensor data for map creation and, at the same time, for costmap creation: unknown areas are added to the costmap as they are added to the main map.
Launching steps:
1. Launch simulation file for robomealmate.
2. Launch slam toolbox.
3. Launch navigation by the following command:
1. ros2 launch nav2_bringup navigation_launch.py
params_file:=src/robomealmate/config/nav2_params.yaml use_sim_time:=true
Note that we passed our params file as a launch argument.
Below is a video showing the navigation process; click on it to play it.
Figure 3.51 robomealmate simulated navigation video
Now that we have simulated robomealmate successfully, we can assemble our hardware and build the physical model.
https://www.youtube.com/embed/GooZI2-4dEM?feature=oembed
3.3 Hardware assembly
Since we have two kinds of controllers, a high-level controller (Raspberry Pi 4) and a low-level controller (Arduino Mega), this section is divided as follows:
1. Low-level controller assembly: in this subsection, we will show the connection and
the components of the low-level controller.
2. High-level controller assembly: in this subsection, we will show the connection and
the components of the high-level controller.
3. Power supply and buttons: in this subsection, we will show the battery connections
and button connections.
3.3.1 Low-level controller assembly
Our work here will be divided into:
1. Motors, encoders, and motor driver connections.
2. Ultrasonics connections.
3. Bluetooth HC-05 and RGB LED strip connection.
3.3.1.1 Motors, encoders, and motor driver connections
The simplest way to make a motor spin is to apply an appropriate supply voltage, but then we have no way to change the speed or direction and no way to control it remotely, so we have to add a motor driver. Part of the reason is that we cannot just plug a motor straight into the controller, because the motor requires a relatively high voltage and a higher current than the pins on the controller chip can handle.
Instead, we use a motor driver: this little board takes a low-voltage, low-current signal from the controller and uses the power supply to amplify it, creating the higher voltage and higher current needed to drive the motors.
Figure 3.52 motor driver concept
Figure 3.53 PWM concept
This signal is usually PWM (pulse width modulation), a series of fast pulses that average out over time. The ratio of on-time to off-time, called the duty cycle, determines the percentage of the supply voltage that the motor effectively sees: the shorter the on-pulses, the slower the motor goes; the longer the on-pulses, the faster it goes; and if the pulses are on 100% of the time the motor runs at full speed.
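A quick numeric illustration of the duty-cycle idea (example numbers, not measurements from our motors):

supply_voltage = 12.0  # volts provided by the motor power supply
for duty_cycle in (0.25, 0.50, 0.75, 1.00):
    effective = supply_voltage * duty_cycle
    print(f"{duty_cycle:.0%} duty cycle -> about {effective:.1f} V seen by the motor")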
Now we need a way to generate that PWM input signal, and for that we use the controller. The motor controller is a bit smarter: its input is in a more practical format, maybe a target speed or
Figure 3.54 motor controller concept