Wimble Robotics Development Blog
If you haven’t visited this site before, you may want to begin by exploring the posts in the following order
The Motivation behind Wimble Robotics
S is the name of my current robot, an evolutionary step toward building a personal assistive device for me as I age.
There is unlikely to be an affordable service to cover my care needs as I age, such as doing some household chores and some personal chores. So it seems my best bet is to invest in trying to invent a device that will do as much as I can make it do with the time and money I have available. I don’t expect a robot to be able to do everything I would like, but enumerating some possible, valuable tasks provides a vision to guide my invention. Tasks that would be useful include:
To perform any of those goals, I envision a slow evolution of capabilities for the robot. I have built several robots in the last few years, each one being built upon what I learned from previous robots and each one having a name starting with the next letter of the alphabet after the previous robot. This robot is called S and the previous robot is called Raven, giving you an idea of its evolutionary heritage. I will rename S at some point with a better name.
At the least, I expect S to be able to reliably move from where it is to an indicated spot in the house. Adding a gripper may involve significant mechanical redesign after the required object weights and torques are determined. Designing a robot that only needs to fetch a beer is quite different from designing a robot that can load a dishwasher, which is different again from designing a robot that can pick me up from a chair. Sensor requirements will evolve as needs evolve. I expect mechanical needs to drive each rebuild of the robot more than software needs. Requirements like faster networks or better network coverage throughout the house are generally solved with minor changes to the robot. Requirements to move a weight at the end of an arm without the robot tipping over are likely to be the dominant mechanical concerns.
Of equal concern is the safety and trustworthiness of the robot. The predecessor to S, namely Raven, was largely an experiment in providing some safety features and some sense of trust. My robot will be useless to me if I cannot trust it to avoid smashing into my rather expensive glass walls, damaging my one piece of expensive furniture, or dropping some hot liquid on me.
Some safety and trustworthiness issues for the robot include:
At a minimum, S is expected to be significantly improved over its predecessor in its reliability (it seldom fails), robustness (it can fail safely) and safety (it can be trusted not to do bad things), and to be able to autonomously move to any place in the house that doesn’t require pushing the boundaries in an unreasonable way, such as needing to pass through an impossibly narrow passage or over an obstacle that would raise one of the wheels so it could no longer contact the floor.
Here is S as it exists as of today.
The frame is made from 2020 aluminum extrusions and aluminum plates. The open frame design allows me to somewhat easily get at the internal wiring, which still changes from time to time. A pair of 8 inch (200 mm) drive wheels is mounted under the centerline of the bottom plate. A pair of caster wheels underneath prevents the robot from tilting too far forward or backward.
A large LiFePO4 battery is placed at the back of the bottom plate to weight the robot so it rests on the back caster, tilting the robot slightly. The front caster provides just enough clearance to pass over floor rugs in the house without causing either drive wheel to lift off the ground. The use of two casters is intended to help stabilize the robot when a gripper is attached.
The power plate you see above, with the terminal barrier strip, holds nearly all of the DC to DC power converters and provides good ventilation, though the power converters never really warm up.
Those gray rectangles at the corners of the bottom frame hold time of flight sensors. Those black rectangles in the center of the crossbars in the middle of the frame hold SONAR sensors.
The robot consists of a bottom plate holding power components and sensors, and a top plate holding computation and communication components. Eventually another plate could be attached above the current top plate to hold an arm and a gripper.
To the right you can see the thick, vertical aluminum plate holding three DC to DC power converters and a terminal barrier strip (two of the converters can be seen from a different angle in a previous picture). The thick, vertical plate in the center holds two custom PC boards I made which are Teensy Monitor boards, and the RoboClaw motor controller. To the left you see the top of the power panel which has the on/emergency-off power switch, a power meter, a touch screen and a few other components. The blue rectangle in the back is the battery that powers the robot.
In this view of the top plate, you can see the pair of WiFi antennas to the left, then a USB hub to the right of that and then a plate holding an HDMI and USB breakout from the main computer. The plate allows me to hook up a monitor and keyboard/mouse to more easily program the main computer, which is an AMD 3700X 8-core computer on a Mini-ITX motherboard with 32GB of RAM and 512GB of NVME disk. Next to the HDMI/USB breakout are PC panel breakouts, such as the reset switch and power/disk activity lights.
Above the WiFi antennas, showing red and black wire connections, are a pair of Anderson Powerpole breakout boxes that distribute 5 volt and 12 volt power from the bottom plate to the top plate. The black cube in the upper right is an LD06 LIDAR. I previously used a pair of LIDARs, mounted at two corners of the robot at different heights, but currently I’m only using a single LIDAR mounted higher up.
The picture below gives a better view of the main computer and also the NVIDIA GeForce GT 1030 graphics card. To the right you see a pair of OAK-D stereo RGB and depth camera modules mounted on a plate that positions them 90 degrees apart, giving the robot a horizontal field of view of nearly 180 degrees.
Below the OAK-D cameras but not visible in the picture is an Intel Realsense T265 visual odometry camera. Currently I’ve given up trying to keep Intel’s software working on the Ubuntu 20.04 operating system. Intel’s software seems to frequently break when I accept software updates. Eventually, I may take the time to fix their software again, but it’s not needed for now.
The picture below shows the other side of the top plate. The rat’s nest of white cables provides the needed power to the Mini-ITX motherboard. Obscured under the white wires is a small silver box, the final DC to DC power converter, which converts 12 volts to 3.3 volts for the motherboard.
The panel includes the on/emergency-off power switch, a 30 amp fuse and a meter showing the battery voltage, current draw and power draw as well as the battery charge. The battery temperature sensor is not hooked up to the battery at this time.
The top left holds a touch screen, which is discussed elsewhere. Below that is a power cutoff switch for the RoboClaw motor controller, allowing me to run experiments, such as path planning, without any chance of the robot actually moving. The two red push-button switches below the motor power switch can be used to reset either of the two custom Teensy Monitor boards, so I don’t have to reach my hand behind this panel if either of the Teensy 4.1 microcomputers fails to program correctly. Finally, there is a panel mount for a pair of Anderson Powerpole connectors to provide the 42 volt power to charge the battery. Eventually the 42 volts will be supplied via some sort of docking connector.
The motors and wheel encoders are mounted underneath the bottom plate. The motors drive a right angle gearbox, allowing a better mounting position for the motors and offloading the weight of the robot from the motor shaft. Glued to each motor housing is an analog temperature sensor so the Teensy Monitor can check for motor overheating.
Where I remembered, I used shrink-wrap tubular markers to label the various cables as I built them. Where I didn’t remember, I added wraparound labels to the cables after the connectors were added to their ends. I used Lever Nuts to connect high current wires; an example is the gray connector with the orange lever sticking out. Since there are 4 SONAR cables and 8 time of flight cables (as well as a few other non-power cables), I also crudely colored the cables with felt tip markers to make them stand out more easily when I’m trying to find where a cable is routed through a bundle of cables.
All of the power distribution cabling is documented using the WireViz software. An example of the documentation is in another paper. I try to keep the documentation up to date as I make changes. It lets me see exactly where each cable end attaches, and even lets me include pictures of the connectors or other devices.
Long cables are generally attached to the frame with self adhesive cable zip tie mounts and zip ties, similar to the following.
I’ve had many failures with the Dupont style connectors for signal cables to the RoboClaw itself. These connectors don’t offer much friction and fall off rather easily on a vibrating robot. In order to provide a more reliable connection, I designed a simple PC board that connects to all of the terminal pins of the RoboClaw board, providing more friction because of the number of pins simultaneously connected to a single, rigid PC board. It also gives me locking connectors for the signals from the two motor encoders and breaks out test points for various signals so I can test for electrical noise and provide shielding as needed. I still need to provide an even more robust connection in the future, but this works well enough for now.
A fair amount of progress has been made on Raven over the last few days.
I have a list of TODO items that need to be tackled. One of those items had been on the list for months, and this week it bubbled to the top: my proximity sensor values were being stamped with stale timestamps.
In ROS (Robot Operating System), the whole software stack that tries to generate commands to move the robot somewhere interesting relies on a flood of data coming in, like SONAR readings, time of flight readings and LIDAR readings. Each of those readings comes with a timestamp indicating when the reading was taken.
On my robot, the readings are made on a custom PC board I made, and then sent to the main computer of the robot. The custom board needs to have the same sense of “now”-ness as the main PC — the two computers have to agree within a few milliseconds on what time it is now. It doesn’t do any good to take a SONAR reading and then have it show up a half second late to the main computer. The algorithms all have a small “tolerance” for time skew, and if sensor readings are too old, they are rejected out of hand as useless.
The software piece I use to transmit my sensor values from the custom board to the main computer is a package called “Micro ROS”. I use this software with some small customization, as my custom board has a lot more capability than the typical board Micro ROS is intended for.
One function built into Micro ROS is the ability to ask another computer what time it is, and set its own clock to match that other computer. But setting it just once doesn’t work. Each computer has a clock driven by a crystal-controlled oscillator, and crystals drift as they heat up. Worse, much worse, the CPU clock chip in my custom board seems to “burp” now and then when interrupt signals come in and my hardware generates a fair number of interrupt signals.
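For reference, the time-synchronization calls I’m describing look roughly like the following sketch. This is a simplified illustration rather than my actual TMicroRos code, and the re-sync interval and timeout values are placeholders I picked for the example.

#include <Arduino.h>
#include <rmw_microros/rmw_microros.h>
#include <builtin_interfaces/msg/time.h>

// Periodically re-ask the agent on the main computer for the current time so
// that crystal drift on the custom board doesn't accumulate.
void maybeResyncTime() {
  static uint32_t last_sync_ms = 0;
  const uint32_t kResyncIntervalMs = 10000;  // Placeholder: re-sync every 10 s.
  if ((millis() - last_sync_ms) > kResyncIntervalMs) {
    // This call blocks while it does a round trip to the agent, which is one
    // reason it must never be made from an interrupt handler.
    rmw_uros_sync_session(100);  // Placeholder: 100 ms timeout.
    last_sync_ms = millis();
  }
}

// Stamp an outgoing sensor message with the agent-synchronized epoch time so
// the main computer sees a timestamp inside its tolerance window.
void stampMessage(builtin_interfaces__msg__Time* stamp) {
  if (rmw_uros_epoch_synchronized()) {
    int64_t now_ns = rmw_uros_epoch_nanos();
    stamp->sec = now_ns / 1000000000;
    stamp->nanosec = now_ns % 1000000000;
  }
}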
Another problem is that Micro ROS has baked into it an expectation that the main computer is using a particular version of communication software, and the expected version currently has bugs which cause my stereo cameras to not operate correctly. It took a bit of reading for me to realize that little factoid.
For the moment, I can ignore that, so I set my main computer back to using the buggy communication software. Also, when Micro ROS asks for the current time, it takes “quite a bit of time” to get the result, usually about 150 milliseconds, but sometimes as much as a second. So any answer it gets from the main PC will be inherently wrong right away.
My last few days of programming have been devoted to finding some way to make that all work within an allowable tolerance for timing errors. I tried over and over, and I’m normally very good at finding creative solutions to tricky problems. Still, my code got progressively worse the more I tried to fix the problem. And then my robot buddy Ralph Hipps called for one of our at-least-daily robot chats, and in the process of explaining the problem to him, it occurred to me what the root cause was. My custom board was attempting to do a complex operation during interrupt handling.
Interrupt handlers on a computer must be very quick. If your interrupt handler code takes more than about a millisecond, sometimes even only a few tens of microseconds, everything falls apart. And because I was explaining the symptoms to someone else, I finally realized that I was taking probably tens of milliseconds in my interrupt handler for the SONAR sensors.
Once I realized that, the fix was easy. The details aren’t too important, but I stashed the important information from the signal in a shared, global location and exited the interrupt handler. Then the normal, non-interrupt code picked up the stashed information for processing. Outside of the interrupt handler, you can largely take whatever time you want to process data. Sort of. ROS navigation depends heavily on high “frame rates” from sensors. If you can’t generate, say, LIDAR data 10 times a second or faster, you have to slow your robot way down, otherwise the robot would move faster than it could react to, say, the cat rearing up in front of your robot with nunchucks in hand.
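The general shape of the fix, sketched below as a minimal Arduino-style example rather than my actual SONAR code (the pin number is a placeholder), is to have the interrupt handler stash its data and exit, leaving the slow work to loop():

#include <Arduino.h>

const uint8_t kEchoPin = 35;  // Placeholder pin number.

// Shared between the interrupt handler and loop(), so marked volatile.
volatile uint32_t g_echo_duration_us = 0;
volatile bool g_reading_ready = false;

void echoInterruptHandler() {
  static uint32_t start_us = 0;
  if (digitalRead(kEchoPin) == HIGH) {
    start_us = micros();  // Echo pulse started.
  } else {
    // Echo pulse ended: stash the duration and get out quickly.
    g_echo_duration_us = micros() - start_us;
    g_reading_ready = true;
  }
}

void setup() {
  pinMode(kEchoPin, INPUT);
  attachInterrupt(digitalPinToInterrupt(kEchoPin), echoInterruptHandler, CHANGE);
}

void loop() {
  if (g_reading_ready) {
    noInterrupts();  // Copy the shared values atomically.
    uint32_t duration_us = g_echo_duration_us;
    g_reading_ready = false;
    interrupts();
    // The slow work (averaging, building and publishing a ROS message,
    // logging) is safe to do here, outside the interrupt handler.
    float distance_mm = duration_us * ((10.0f / 2.0f) / 29.1f);
    (void)distance_mm;
  }
}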
The robot is now happily sending sensor data nearly three times faster than before, about 80 frames a second, and rarely gets the time out of sync by very much. Now I can move on to the new problems that have shown up because I fixed this one.
Below is an obligatory snapshot of part of the visualization software showing my robot in a room, with LIDAR, SONAR and time of flight sensor readings showing as well. No cat is in this picture.
Everything about robots is hard (TM)
I’ve spent the last week adding two smart cameras to my robot, which currently provide 6 regular camera images and produce two point clouds (the grey structures in the image below). When I first got them working and turned on the depth computation, I hadn’t charged the robot’s battery in some time, and the additional power drawn by the depth computation caused the battery to die and the whole robot to shut off.
The pair of cameras draws an additional 60 watts of power right now, and I haven’t even turned on the artificial intelligence computation yet. Well, I don’t think I have. The cameras provide a lot of capability, and I’ve had to rewrite the manufacturer’s sample code to get the ability to run two cameras at once on the same robot, so I’m not yet sure what features I’ve enabled besides depth calculation and stereo vision.
The text at the very bottom of the picture shows that when I run the base robot code and enable the visualizer (rviz2) software, 3/4 of the computer’s capability is being consumed, enough to make it hard to do anything else on the computer, like composing this message while the robot software is running.
Progress is good.
I have made several attempts over the last few years to get my robot to work under simulation in Gazebo, and I finally have Raven working. Well, it works under Gazebo; I will still need to work out how to make it work in real life again, but that should be easier. For now, I had to remove the T265 in simulation, as it sat in the frame tree as the provider of the odom topic and transformation, which is now done by Gazebo, so I will probably need to make separate URDFs for simulation versus the real robot.
Thanks go out to Dillo for providing Gazebo simulations for the OAK-D cameras I use. I started with his work at:
tenacity_rover/tenacity_description/urdf at 0945115d723c82f1d72bb4542ab3540b5a3fad10 · jetdillo/tenacity_rover (github.com)
I made changes so that it works under Humble and with more than one camera on a robot. I can offer my changes, which are not very big, back to Steve if he wants to consider them.
Thanks also to the folks at Articulated Robotics, who have produced some of the best videos I’ve seen on using ROS 2, and I’ve seen an awful lot of videos on ROS 2. This video helped me to finally get the small details right for simulation:
And here is a picture of my robot in a small world, shown in rviz2 and in Gazebo:
My latest robot, Raven, now has two custom PC boards in it, each containing a Teensy 4.1 processor. One board handles all the proximity sensors and does a few other chores; the other manages the touch screen, senses a few other values, such as the battery voltage, and does some other control functions. As with any USB device plugged into Linux, however, the name that gets assigned to the device under the /dev directory can change every time you power up the Linux computer or merely unplug and then plug the USB device back in. Fortunately, the manufacturer of the Teensy 4.1 devices supplies a unique serial number for each device, and I use that attribute value to create custom udev rules that create new, constant symbolic names under the /dev directory. One name, /dev/teensy_sensor, will always point to the correct /dev/ttyACMX device (where X is some digit), no matter how Linux renames it when it discovers the device. And /dev/teensy_screen will always point to the other device. Any script or code I create that wants a device name for communicating with a Teensy 4.1 device can use the appropriate name I created via a custom udev rule.
The default udev rule file supplied by the manufacturer of the Teensy 4.1 device does not provide any static /dev name if you plug in multiple devices. The default USB device names of, e.g., /dev/ttyACM0, /dev/ttyACM1, etc. will point to different Teensy devices every time you power up or unplug and replug your Teensy device. What you want is a symlink providing a name of your choosing that reliably points to the correct Teensy device.
If you use, e.g., the lsusb command, you might see something like the following, where lines that don’t refer to the Teensy devices are replaced with ellipses:
...
Bus 001 Device 108: ID 16c0:0483 Van Ooijen Technische Informatica Teensyduino Serial
...
Bus 001 Device 111: ID 16c0:0483 Van Ooijen Technische Informatica Teensyduino Serial
...
You could then look at more details of a specific device with a command such as “lsusb -v -s 001:108”, where “-v” asks for verbose output and the pair of colon-separated numbers after “-s” specify the bus and device of the specific USB device you want to inspect. The beginning of the resulting output might look something like:
Bus 001 Device 108: ID 16c0:0483 Van Ooijen Technische Informatica Teensyduino Serial
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 239 Miscellaneous Device
bDeviceSubClass 2
bDeviceProtocol 1 Interface Association
bMaxPacketSize0 64
idVendor 0x16c0 Van Ooijen Technische Informatica
idProduct 0x0483 Teensyduino Serial
bcdDevice 2.80
iManufacturer 1 Teensyduino
iProduct 2 USB Serial
iSerial 3 8771250
...
Where the trailing lines are replaced in the above with an ellipsis.
To create a udev rule specific to one Teensy 4.1 device, you need to add a specifier to the rule that qualifies on the iSerial value. In the default udev file provided by the manufacturer, namely the file at /etc/udev/rules.d/00-teensy.rules, you will see a line like:
ATTRS{idVendor}=="16c0", ATTRS{idProduct}=="04[789a]*", ENV{MTP_NO_PROBE}="1"
Replace that line with multiple lines that include a qualifier on the serial number and add a SYMLINK action. Also add the KERNEL qualifier at the beginning of the rule; otherwise, when you reprogram the Teensy, the symlink will point to a different class of device. For example:
KERNEL=="ttyACM*", ATTRS{idVendor}=="16c0", ATTRS{idProduct}=="04[789a]*", ATTRS{serial}=="8771250", ENV{MTP_NO_PROBE}="1", SYMLINK+="teensy_sensor"
KERNEL=="ttyACM*", ATTRS{idVendor}=="16c0", ATTRS{idProduct}=="04[789a]*", ATTRS{serial}=="12135850", ENV{MTP_NO_PROBE}="1", SYMLINK+="teensy_screen"
With the above, my Teensy 4.1 device with an iSerial value of 8771250 will have a symlink named /dev/teensy_sensor that points to the correct /dev/ttyACMX device (where X is a digit), no matter which X value Linux assigns to the device when it is plugged in. Likewise, my other Teensy 4.1 device, with a different serial number, will always get a symlink named /dev/teensy_screen that I can use as a device name in any code or script that would normally want a device name like, e.g., /dev/ttyACM0.
Note that even though the attribute is named iSerial when shown by lsusb, the qualifier name in the udev rule must use serial as the name.
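As a small illustration of the payoff, here is a sketch (not code from my robot) of how a program on the main computer could open the stable symlink instead of a raw /dev/ttyACMX name. The baud rate shown is an arbitrary placeholder; Teensy USB serial generally ignores it anyway.

#include <cstdio>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main() {
  // The udev rule above guarantees this symlink always resolves to the
  // correct underlying ttyACM device for the sensor Teensy.
  int fd = open("/dev/teensy_sensor", O_RDWR | O_NOCTTY);
  if (fd < 0) {
    perror("open /dev/teensy_sensor");
    return 1;
  }
  // Put the port into raw mode; the baud rate is a placeholder.
  termios tty{};
  tcgetattr(fd, &tty);
  cfmakeraw(&tty);
  cfsetispeed(&tty, B115200);
  cfsetospeed(&tty, B115200);
  tcsetattr(fd, TCSANOW, &tty);
  // ... read() and write() to the Teensy here ...
  close(fd);
  return 0;
}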
While my robot “Raven” uses MicroRos to communicate with my TeensyMonitor board, which is based on a Teensy 4.1 device, there are times when software faults happen and the information I need to debug the fault doesn’t get transmitted as a ROS message. To overcome that, I have a software module on the TeensyMonitor board that logs a lot of information to an SD memory card. After a fault is detected, I can pull that memory card from the TeensyMonitor board, plug it into another computer, and look at the text log files for diagnostic information.
A problem with the built-in SD card on the Teensy 4.1, when using the Arduino SD library, is that the availableForWrite function always returns zero, so the example code for non-blocking I/O doesn’t work. This means that writes to the card necessarily block until complete. In a robot, any operation that takes a long time can be hazardous to the safety of the robot. My module doesn’t eliminate the problem, but it reduces it by writing data to the SD card in 4 KB chunks. The result is that my loop rate drops from around 200 loop calls per second to about 175. This, of course, is specific to the work I do per loop call, which involves reading over a dozen sensors and sending as well as receiving messages via MicroRos.
A previous article explained my use of TModule in my software stack, which injects performance monitoring into each of my software modules, so I won’t explain that framework again here. To use this logging module, you initialize it simply by including one line of code in your main “.ino” file, namely
TSd& sd = TSd::singleton();
Then you add lines to the log file via a call like:
TSd::singleton().log("INFO [MyModule::myFunction] Some message here");
Which will write to the log file a line similar to:
[0000111.428] INFO [MyModule::myFunction] Some message here
Where the number in brackets is the number of milliseconds that have elapsed since the Arduino started up.
Here is the header file, “tsd.h” for the module:
#pragma once
#include <Regexp.h>
#include <SD.h>
#include "tmodule.h"
// A class used to write logging information to an SD memory card.
//
// On setup, the SD card is examined for any existing files with names like
// "LOG12345.TXT", where "12345" is a 5-digit serial number. A new log file
// will be created on the card with a name having a serial number of one higher
// than any existing log file name, or LOG00001.TXT if no existing log files
// were found.
//
// Every time TSd::singleton().log() is called, the message is added to a
// string buffer. When the buffer becomes full enough, it is written as one
// large chunk of text to the log file. This is done to avoid lots of little
// writes, which would slow down the outer loop of the Arduino device.
//
// The downside is that if the card is pulled from the Arduino device, the last
// chunk of text won't have been written to it. Currently, a fair number of
// things are written to the log file, so if you just wait a few seconds before
// pulling the card, the interesting thing you were looking for has probably
// been successfully written to the file by the time you pull the card.
//
// This is intended to be used with the TModule software framework as part of the
// TeensyMonitor stack. So all you need to do to instantiate this module and get it
// going is to call TSd::singleton() in your ".ino" file before calling TModule::DoSetup().
//
// If a write to the card fails, perhaps because the card is full or the card has been
// pulled from the Arduino device, further writes are not attempted until the system
// is restarted.
class TSd : TModule {
public:
// Write message to log file.
void log(const char* message);
// Singleton constructor.
static TSd& singleton();
protected:
// From TModule.
void loop();
// From TModule.
virtual const char* name() { return "TSd"; }
void setup();
private:
// Private constructor.
TSd();
static void regexpMatchCallback(const char* match, const unsigned int length,
const MatchState& matchState);
// Used to hold a big chunk of data so writes are fewer.
String data_buffer_;
// The SD card device.
static SDClass g_sd_;
// Used to find the highest log file number already on the SD card.
static int g_highestExistingLogFileNumber_;
// Has SD device been properly initialized?
static bool g_initialized_;
// The file handle for the log file on the SD card device.
static File g_logFile_;
// Singleton instance.
static TSd* g_singleton_;
};
Here is the body file, “tsd.cpp”:
#include "tsd.h"
#include <Arduino.h>
#include <Regexp.h>
#include <SD.h>
#include <stdint.h>
// #include "tmicro_ros.h"
void TSd::log(const char* message) {
static const unsigned int kChunkSize = 4096;
if (g_initialized_) {
char log_message[256];
uint32_t now = millis();
snprintf(log_message, sizeof(log_message), "[%07ld.%03ld] %s\n", now / 1000,
now % 1000, message);
data_buffer_ += log_message;
if (data_buffer_.length() >= kChunkSize) {
size_t bytes_written = g_logFile_.write(data_buffer_.c_str(), kChunkSize);
if (bytes_written > 0) {
g_logFile_.flush();
} else {
// Assume the card has been removed or is failing.
g_initialized_ = false;
}
data_buffer_.remove(0, kChunkSize);
}
}
}
void TSd::loop() {}
void TSd::regexpMatchCallback(const char* match, const unsigned int length,
const MatchState& matchState) {
char regexMatchString[10]; // Big enough to hold 5-digit file serial number.
matchState.GetCapture(regexMatchString, 0); // Get 0-th match from regexp.
int logSerialNumberAsInt = atoi(regexMatchString);
if (logSerialNumberAsInt > TSd::g_highestExistingLogFileNumber_) {
TSd::g_highestExistingLogFileNumber_ = logSerialNumberAsInt;
}
}
void TSd::setup() {
data_buffer_.reserve(8192);
g_highestExistingLogFileNumber_ = 0;
g_initialized_ = false;
if (!g_sd_.begin(BUILTIN_SDCARD)) {
// ERROR Unable to access builtin SD card.
} else {
File rootDirectory = g_sd_.open("/");
while (true) {
File nextFileInDirectory = rootDirectory.openNextFile();
if (!nextFileInDirectory) break;
char fileName[256];
strncpy(fileName, nextFileInDirectory.name(), sizeof(fileName));
MatchState matchState;
matchState.Target(fileName);
matchState.GlobalMatch("LOG(%d+).TXT", regexpMatchCallback);
}
char
newLogFileName[20]; // Big enough to hold file name like: LOG12345.TXT.
sprintf(newLogFileName, "LOG%05d.TXT", ++g_highestExistingLogFileNumber_);
g_logFile_ = g_sd_.open(newLogFileName, FILE_WRITE);
if (!g_logFile_) {
char diagnosic_message[128];
snprintf(diagnosic_message, sizeof(diagnosic_message),
"ERROR [TSd::setup] Unable to create new log file: '%s'",
newLogFileName);
} else {
g_initialized_ = true;
}
}
}
TSd::TSd() : TModule(TModule::kSd) {}
TSd& TSd::singleton() {
if (!g_singleton_) {
g_singleton_ = new TSd();
}
return *g_singleton_;
}
SDClass TSd::g_sd_;
int TSd::g_highestExistingLogFileNumber_ = 0;
bool TSd::g_initialized_ = false;
File TSd::g_logFile_;
TSd* TSd::g_singleton_ = nullptr;
The time-of-flight proximity sensors on my Raven (formerly Puck) robot use VL53L0X chips and are fairly noisy. I wanted a quick way to visualize just how noisy they were. Here is the graph I produced:
The two rows represent the front versus back of the robot frame. The four columns represent the two pairs of sensors for the left and right sides of the robot frame. At each corner of the robot frame, there is a time-of-flight sensor that is pointed outward to the side (left or right) and another that is pointed either forward or backward. Each histogram is showing normalized counts per distance over the last sampling period.
As you can see from the histograms, some of the sensors are fairly noisy. This is with the robot not moving and nothing nearby moving. Readings at 0 mm should be ignored, as I moved all readings beyond 2 meters into the 0 meter bucket because they are uninteresting. The histogram in the third column of the first row shows a fairly wide variance in readings, while the histogram in the last column of the last row shows a pretty narrow variance.
Below is the Python code used to generate the plots. It includes code to read the ROS 2 messages generated by my custom monitor computer and uses matplotlib to do the display.
from datetime import datetime
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, QoSReliabilityPolicy, QoSHistoryPolicy
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
from sensor_msgs.msg._range import Range
from std_msgs.msg import String
class MinimalSubscriber(Node):
def __init__(self):
super().__init__('minimal_subscriber')
self.start_time = datetime.now()
self.callback_count = 0
# These are the title strings shown per sensor histogram.
# They are a shortcut, for me, as to the physical position of the corresponding sensor.
# The 'X" shows where the sensor is located.
# The '<' and '>' show whether the sensor is on the left or right side of the robot frame.
# The '^' and 'v' show whether the sensor is on the front or rear of the robot frame.
self.sensor_names = [['X<^-', '-<^X', 'X^>-', '-^>X'],
['X<v-', '-<vX', 'Xv>-', '-v>X']]
self.number_sensors = 8 # There are 8 sensors.
self.number_values_to_cache = 20 # I want to show the variance over this number of the last readings.
self.last_n_values_per_sensor = np.zeros(
(self.number_sensors, self.number_values_to_cache), dtype='float')
self.next_index_number = np.zeros((self.number_sensors), dtype='int32')
# Create an array of histograms.
# Two rows for front vs back of robot.
# Four columns for left-sideways, left-front-or-back, right-front-or-back, right-sideways position.
self.figure, self.axis = plt.subplots(
nrows=2, ncols=4, sharex=False, sharey=False, squeeze=False, figsize=(8, 2))
# Set the window title.
self.figure.canvas.set_window_title('Time Of Flight Sensors step')
# Create the x-axis values. I'm interested in only ranges from 0.00 to 1.00 meters.
self.bins = [x / 100.0 for x in range(100)]
# Make it all look pretty.
plt.subplots_adjust(hspace=0.6)
plt.autoscale(enable=True, axis='both', tight=True)
plt.rcParams['lines.linewidth'] = 1
# Set up the ROS 2 quality of service in order to read the sensor data.
qos_profile = QoSProfile(
reliability=QoSReliabilityPolicy.BEST_EFFORT,
history=QoSHistoryPolicy.KEEP_LAST,
depth=1
)
# Subscribe to the sensor topics.
for sensor_number in range(8):
self.subscription = self.create_subscription(
Range,
'/tof{s}Sensor'.format(s = sensor_number),
self.listener_callback,
qos_profile,
)
# Set up the 8 histogram formats and titles.
self.patches = [1, 2, 3, 4, 5, 6, 7, 8]
for row in range(2):
for col in range(4):
n, bins, patches = self.axis[row][col].hist(
self.last_n_values_per_sensor[row][col], self.bins, histtype='bar')
self.patches[(row * 4) + col] = patches
self.axis[row, col].set_title(
self.sensor_names[row][col], fontsize=8, fontfamily='monospace')
# Let's go.
plt.ion()
plt.show()
self.subscription # prevent unused variable warning
# Process a time-of-flight sensor message of type Range.
def listener_callback(self, msg):
self.callback_count = self.callback_count + 1
sensor_number = int(msg.header.frame_id[-1]) # Get the sensor number.
range_value = msg.range
if (range_value > 2.0):
# If the range is greater than 2 meters, ignore it by setting it to zero.
range_value = 0
# Capture the last readings of the sensor in a ring buffer.
self.last_n_values_per_sensor[sensor_number][self.next_index_number[sensor_number]] = range_value
if (self.callback_count % 24) == 0:
# Periodically update the plots.
for s in range(8):
# For each sensor, create a histogram.
data = self.last_n_values_per_sensor[s]
n, _ = np.histogram(data, self.bins, density=True)
max = n.max()
for count, rect in zip(n, self.patches[s]):
rect.set_height(count / max) # Normalize the height of the rectangle.
self.figure.canvas.draw()
self.figure.canvas.flush_events()
# Print out the frames per second of sensor data for all 8 sensors since the last plot update.
# Divide by 8 if you want to know the frames per second per sensor.
duration = datetime.now() - self.start_time
fps = self.callback_count / (duration.seconds + (duration.microseconds / 1000000.0))
print("callback_count: %d, duration: %f, fps: %3.2f" % (self.callback_count, (duration.seconds + (duration.microseconds / 1000000.0)), fps))
# Update the ring buffer index.
self.next_index_number[sensor_number] = self.next_index_number[sensor_number] + 1
if self.next_index_number[sensor_number] >= self.number_values_to_cache:
self.next_index_number[sensor_number] = 0
def main(args=None):
rclpy.init(args=args)
minimal_subscriber = MinimalSubscriber()
rclpy.spin(minimal_subscriber)
minimal_subscriber.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
Arduino-like processors are programmed with a setup and loop mechanism. The function setup is called once by the underlying firmware, and then loop is called repeatedly after that. I created a TModule class that I use to extend/replace/wrap those functions (choose how you want to think about it). This primarily gives me the ability to keep track of the performance of each piece of software that I write.
My robot uses a Teensy 4.1 microprocessor and a custom PC board that I designed, into which the Teensy 4.1 plugs and which extends all the I/O of the Teensy in a useful way. For instance, I use one of the Teensy 4.1 I2C channels along with an I2C multiplexer to talk to the eight time-of-flight sensors on the robot. The HC-SR04 SONAR sensors require level converters to translate the 3.3 volt I/O pins of the Teensy to the 5 volt pins of the SONAR sensors. And so on. My robot uses nearly every I/O pin on the Teensy.
The software on the Teensy, which I call the Teensy Monitor, is organized around modules. For instance, there is a SONAR module, a time-of-flight module, a module to talk to the RoboClaw motor controller, a module to communicate with the other processors on the robot using micro-ROS, and so on. For each of these modules, I need to see if there are any hardware failures and I need to see if they are performing as expected.
For example, it’s possible that one of the time-of-flight sensors isn’t working correctly. It’s possible that one of the motor current sensors isn’t responding. It’s possible that some module is taking too long to operate, which might then affect how often the main processor of the robot gets SONAR data, which might make the robot operate in an unsafe manner.
My TModule class handles the performance monitoring part of all of this.
The way you use TModule is that each software module is a class that derives from TModule. As each software module class is instantiated, the instance is added to a list of all instantiated modules. TModule has a DoSetup method that, when called, will call setup for each instantiated module. And TModule has a DoLoop method that, when called, will call the loop method for each instantiated module, but it wraps that loop call with code that will measure the duration of the loop call.
For each software module, statistics are captured for the minimum, maximum and average execution duration of the loop call. There is a method in TModule that returns a JSON string showing the statistics for every instantiated module. My module that uses micro-ROS has a method that periodically gets the statistics from TModule and sends that JSON string to the ROS system on the robot. Asking for the statistics from TModule also resets the statistics, so each JSON message shows the performance of every instantiated module since the last report.
Here is what a JSON statistics report looked like at one point in my software development:
{
"Stats": {
"loops": 211,
"Ms": 999.8,
"mdls": [
{ "n": "Alrm", "MnMxAv": [0.0, 0.0, 0.0] },
{ "n": "uRos", "MnMxAv": [0.2, 1.6, 1.3] },
{ "n": "Amps", "MnMxAv": [1.0, 1.0, 1.0] },
{ "n": "Rlay", "MnMxAv": [0.0, 0.0, 0.0] },
{ "n": "Robo", "MnMxAv": [0.6, 1.2, 0.8] },
{ "n": "Sonr", "MnMxAv": [0.0, 0.0, 0.0] },
{ "n": "Temp", "MnMxAv": [0.0, 0.0, 0.0] },
{ "n": "ToFl", "MnMxAv": [1.6, 1.7, 1.6] }
]
}
}
This says that since the last reported statistics, the Arduino loop method was called 211 times and it took 999.8 milliseconds to execute those 211 calls. The 999.8 milliseconds period is because the micro-ROS module is set up to report statistics every second. This message lets me know that in the last second, the SONAR module was called 211 times, the time-of-flight module was called 211 times, and so on.
Each module subclass has a name method that returns a short string naming the module. That value shows after the “n” keyword above. So, the module that talks to the RoboClaw is named Robo, and over the last 211 times the RoboClaw module had its loop method called, the minimum execution time of a loop call was 0.6 milliseconds, the maximum was 1.2 milliseconds, and the average was 0.8 milliseconds. If I wanted to increase the overall number of loop calls per second, I should first look at improving the modules with the largest maximum or average duration, such as the micro-ROS loop method, called uRos in the report, as its average execution duration is 1.3 milliseconds, or the time-of-flight module, called ToFl in the report, as its average execution duration is 1.6 milliseconds.
The header file, tmodule.h is:
#pragma once
#include <stdint.h>
#include "Arduino.h"
/**
* Base class for a kind of module which:
* - Needs to be setup once via a call to DoSetup().
* - Needs to be invoked once per "processing cycle" via a call to DoLoop().
* - Generates performance statistics gathered about performance during loop
* execution.
*
* Usage:
* Each module exists as a singleton. Somewhere, modules are
* brought into existence by calling, typically, their
* singleton() method. As each module comes into existence,
* this TModule class keeps a list of them.
*
* After all modules are brought into existence, you call the
* DoSetup() method to invoke the setup() method for all registered
* modules. Then in the main processing loop, you invoke the
* DoLoop() method to invoke the loop() method for all registered
* modules.
*
* Periodically, some agent calls GetStatistics() to get a string showing
* execution performance of all the registered modules.
*/
class TModule {
public:
// A list of all possible modules.
typedef enum Module {
kMicroRos, // Make this the first module as it is used for diagnostic reporting.
kMotorCurrent,
kPanelSelector,
kRelay,
kRoboClaw,
kSonar,
kTemperature,
kTimeOfFlight,
kNumberModules // The number of all possible modules.
} Module;
// Call loop() for all registered modules.
static void DoLoop();
// Call setup() for all registered modules.
static void DoSetup();
// Get a JSON string showing module performance since the
// last call to GetStatistics.
static void GetStatistics(char* outString, size_t outStringSize);
protected:
TModule(Module moduleKind);
// Perform regular, cyclic work for the module.
virtual void loop() = 0;
// Return module name.
virtual const char* name() = 0;
// Perform one-time setup for the module.
virtual void setup() = 0;
private:
TModule();
// Define slots for gathering statistics for the module.
typedef enum Slot {
kMin,
kMax,
kSum,
kNumberSlots // Number of slots to reserve for statistics.
} Slot;
// Number of times DoLoop() was called.
static uint32_t total_do_loop_count_;
// A list of all registered modules.
static TModule* all_modules_[];
// Index to the next statistic reading for the module.
int loop_calls_between_get_statistics_calls_;
// Statistics gathered per registered modules.
float duration_stats_[kNumberSlots];
};
The body code file tmodule.cpp
#include "tmodule.h"
#include <Arduino.h>
#include <stdint.h>
#include "tmicro_ros.h"
TModule::TModule(TModule::Module moduleKind) {
all_modules_[moduleKind] = this;
loop_calls_between_get_statistics_calls_ = 0;
for (size_t i = 0; i < kNumberSlots; i++) {
duration_stats_[i] = 0.0;
}
}
void TModule::GetStatistics(char* outString, size_t outStringSize) {
static uint32_t statTimingStart = micros();
char statList[2048];
statList[0] = '\0';
for (size_t i = 0; i < kNumberModules; i++) {
if (all_modules_[i] != nullptr) {
TModule* module = all_modules_[i];
char temp[512];
temp[0] = '\0';
snprintf(temp, sizeof(temp),
"{\"n\":\"%-s\",\"MnMxAv\":[%-2.1f,%-2.1f,%-2.1f]},",
module->name(), module->duration_stats_[kMin],
module->duration_stats_[kMax],
module->duration_stats_[kSum] / total_do_loop_count_);
strcat(statList, temp);
module->loop_calls_between_get_statistics_calls_ = 0;
module->duration_stats_[kMin] = 10'000'000.0;
module->duration_stats_[kMax] = -10'000'000.0;
module->duration_stats_[kSum] = 0.0;
}
}
// Remove trailing comma from previous list.
if (strlen(statList) > 0) {
statList[strlen(statList) - 1] = '\0';
}
snprintf(outString, outStringSize,
"{\"loops\":%-ld,\"Ms\":%-2.1f,\"mdls\":[%-s]}",
total_do_loop_count_, ((micros() * 1.0) - statTimingStart) / 1000.0,
statList);
statTimingStart = micros();
total_do_loop_count_ = 0;
}
void TModule::DoLoop() {
for (size_t i = 0; i < kNumberModules; i++) {
if (all_modules_[i] != nullptr) {
TModule* module = all_modules_[i];
uint32_t start = micros();
all_modules_[i]->loop();
float duration = (micros() - start) / 1000.0;
module->duration_stats_[kSum] += duration;
if (duration < module->duration_stats_[kMin]) {
module->duration_stats_[kMin] = duration;
}
if (duration > module->duration_stats_[kMax]) {
module->duration_stats_[kMax] = duration;
}
module->loop_calls_between_get_statistics_calls_++;
}
}
total_do_loop_count_++;
}
void TModule::DoSetup() {
for (int i = 0; i < kNumberModules; i++) {
if (all_modules_[i] != nullptr) {
all_modules_[i]->setup();
}
}
}
TModule* TModule::all_modules_[TModule::kNumberModules + 1] = {
nullptr, nullptr, nullptr, nullptr, nullptr,
nullptr, nullptr, nullptr};
uint32_t TModule::total_do_loop_count_ = 0;
This is all driven by code in the main ino file for the Arduino-like system, where each module of interest is instantiated, then DoSetup and DoLoop are called as shown.
#include <stdint.h>
#include "tmicro_ros.h"
#include "tmodule.h"
#include "trelay.h"
#include "troboclaw.h"
#include "tsonar.h"
#include "ttemperature.h"
#include "ttime_of_flight.h"
// Initialize all TModule instances in any required order.
TMicroRos& micro_ros = TMicroRos::singleton();
TRelay& relay = TRelay::singleton();
TRoboClaw& roboclaw = TRoboClaw::singleton();
TSonar& sonar = TSonar::singleton();
TTemperature& temperature = TTemperature::singleton();
TTimeOfFlight& time_of_flight = TTimeOfFlight::singleton();
void setup() {
TModule::DoSetup();
}
void loop() {
TModule::DoLoop();
}
Handling multiple sensors on an Arduino-like processor, specifically a Teensy 4.1 in my case, can be a bit tricky if you want to get high loop rates.
In my robot, I’m using the Teensy to send data from 8 time-of-flight sensors, 4 SONAR sensors, 2 motor current sensors and 2 motor temperature sensors while also controlling a RoboClaw motor controller, an 8-channel relay board and a touch screen display. On top of this, it runs micro-ROS to send and receive data messages to the rest of the robot.
In this article I will talk only about the issue of using the inexpensive HC-SR04 SONAR sensors. Here is one article that gives you some background on what the sensor is and how it works.
For a robot that uses the ROS software stack, getting data at fast frame rates is crucial; otherwise the robot will have moved well past where it was when the sensor data was gathered, and the data no longer represents the truth from where the robot is now.
It can take on the order of 20 milliseconds to get a reading from the SONAR at its maximum distance. If you had a single sensor and read it once per loop, you could, in theory, send about 50 readings a second to your robot. With 4 sensors, however, if you sensed one SONAR after the other in a round-robin manner, the frame rate would drop to about 50/4, or 12.5 frames per second, which may be too low a rate for your robot.
If your robot is moving at the rate of a slow walk for a person, about 1 mile per hour, that translates to 0.45 meters per second. At a frame rate of 12.5 frames per second, the robot will move 36 millimeters or about 1.4 inches between readings. If you are traveling along a narrow hallway or in a crowded room, that may not be good enough to avoid crashing into nearby obstacles. If you can get back up to nearly 50 frames per second, though, the robot will only travel about 9 millimeters or just over a third of an inch. Add in the other things your Arduino processor is doing besides reading SONAR sensors and frame rates could get much worse.
To get around this problem, you need to use interrupt handlers to control the SONAR sensors. Briefly, what I do (as the code below shows) is use a timer interrupt to trigger the SONAR sensors one at a time, round-robin, spaced apart in time so they don't interfere with each other, and attach a change interrupt to each echo pin that captures the start and end of the echo pulse, converts the elapsed time to a distance, averages the last few readings, and publishes the result.
On a Teensy 4.1, my code is publishing about 25 frames per second for each of the 4 SONAR sensors, or 100 frames per second altogether. The sensors are spaced out in time; the readings don’t all arrive at once, as described above. Also, the whole code is driven by my TModule code module, which I’ll describe in another article, but you can take this code by itself, combine the loop and setup methods appropriately, and run it without my TModule module.
Here is tsonar.h
#pragma once
#include <Wire.h>
#include <stdint.h>
#include "tmodule.h"
class TSonar : TModule {
public:
// Which SONAR sensor.
typedef enum Sonar {
kFront,
kRight,
kBack,
kLeft,
kNumberSonars // Number of SONAR sensors.
} Sonar;
// Get sensed range for the device. A value of < 0 => no sensor 'device'
// detected during setup.
int GetValueMm(Sonar device);
// Get the average range for the device.
float GetAverageValueM(Sonar device);
// Singleton constructor.
static TSonar& singleton();
protected:
// From TModule.
void loop();
// From TModule.
const char* name() { return "Sonr"; }
// From TModule.
void setup();
private:
// Number of readings to average together to reduce noise.
static const uint8_t kNumberReadingsToAverage = 4;
// Private constructor.
TSonar();
// Common interrupt handler.
static void CommonInterruptHandler(uint8_t pin, long& end_time,
long& start_time, uint8_t& average_index,
size_t sonar_index);
// Interrupt handler for per device echo.
static void Echo0InterruptHandler();
static void Echo1InterruptHandler();
static void Echo2InterruptHandler();
static void Echo3InterruptHandler();
static void TimerInterruptHandler();
// Next device being handled in the round-robin processing.
static uint8_t g_next_sensor_index_;
// Singleton instance.
static TSonar* g_singleton_;
// Last captured sensed distance for each SONAR device.
static int g_values_mm_[kNumberSonars];
// List of last values, for computing an average.
static int g_values_mm_history_[kNumberSonars][kNumberReadingsToAverage];
// Last captured average distance for each SONAR device.
static float g_average_value_m_[kNumberSonars];
// GPIO pins for controlling the SONAR sensors.
enum {
kPinEcho0 = 35,
kPinTrigger0 = 34,
kPinEcho1 = 37,
kPinTrigger1 = 36,
kPinEcho2 = 41,
kPinTrigger2 = 40,
kPinEcho3 = 15,
kPinTrigger3 = 14
};
// Microseconds per timer interrupt.
static const uint16_t kTimerPeriodUSec = 20;
// Desired milliseconds to wait between a low pulse and the next high pulse.
static const uint16_t kTimerSamplingPeriodMSec = 10;
// Timer interrupts to expire before a low pulse to high pulse transition is
// taken.
static const uint16_t kTimerCountsPerSamplingPeriod =
(kTimerSamplingPeriodMSec * 1000) / kTimerPeriodUSec;
// For converting SONAR echo time to millimeters.
static const float g_time_to_mm_scaler_;
};
And here is tsonar.cpp
#include "tsonar.h"
#include <stdint.h>
#include "Arduino.h"
#include "TimerOne.h"
#include "Wire.h"
#include "tmicro_ros.h"
void TSonar::CommonInterruptHandler(uint8_t pin, long& end_time,
long& start_time, uint8_t& average_index,
size_t sonar_index) {
switch (digitalRead(pin)) {
case HIGH:
// When the SONAR begins transmitting, the echo pin will
// go HIGH. Capture the start time.
end_time = 0;
start_time = micros();
break;
case LOW:
// When an echo is received, the echo pin will go LOW.
// Compute the distance to the object and send a ROS message.
end_time = micros();
g_values_mm_[sonar_index] =
(end_time - start_time) * g_time_to_mm_scaler_;
g_values_mm_history_[sonar_index][average_index++] =
g_values_mm_[sonar_index];
if (average_index >= kNumberReadingsToAverage) {
average_index = 0;
}
// Compute an average distance over the last few readings
// to help reduce the sensor noise.
int average_sum_mm = 0;
for (size_t i = 0; i < kNumberReadingsToAverage; i++) {
average_sum_mm += g_values_mm_history_[sonar_index][i];
}
g_average_value_m_[sonar_index] =
(average_sum_mm * 0.001) / kNumberReadingsToAverage;
// Publish the average distance for the sensor.
TMicroRos::PublishSonar(sonar_index, g_average_value_m_[sonar_index]);
break;
}
}
float TSonar::GetAverageValueM(Sonar device) {
return g_average_value_m_[device];
}
void TSonar::Echo0InterruptHandler() {
// Each sensor has its own distance timer and its own
// set of values used to compute an average distance.
static long end_time = 0;
static long start_time = 0;
static uint8_t average_index = 0;
CommonInterruptHandler(kPinEcho0, end_time, start_time, average_index, 0);
}
void TSonar::Echo1InterruptHandler() {
static long end_time = 0;
static long start_time = 0;
static uint8_t average_index = 0;
CommonInterruptHandler(kPinEcho1, end_time, start_time, average_index, 1);
}
void TSonar::Echo2InterruptHandler() {
static long end_time = 0;
static long start_time = 0;
static uint8_t average_index = 0;
CommonInterruptHandler(kPinEcho2, end_time, start_time, average_index, 2);
}
void TSonar::Echo3InterruptHandler() {
static long end_time = 0;
static long start_time = 0;
static uint8_t average_index = 0;
CommonInterruptHandler(kPinEcho3, end_time, start_time, average_index, 3);
}
int TSonar::GetValueMm(Sonar device) {
if (static_cast<int>(device) >= kNumberSonars) {
return -1;
} else {
return g_values_mm_[static_cast<int>(device)];
}
}
void TSonar::loop() {
// Nothing needs to be done in the loop. Everything is driven
// by interrupts.
}
void TSonar::setup() {
// Set up the Teensy pins to talk to the SONAR devices.
pinMode(kPinEcho0, INPUT);
pinMode(kPinTrigger0, OUTPUT);
pinMode(kPinEcho1, INPUT);
pinMode(kPinTrigger1, OUTPUT);
pinMode(kPinEcho2, INPUT);
pinMode(kPinTrigger2, OUTPUT);
pinMode(kPinEcho3, INPUT);
pinMode(kPinTrigger3, OUTPUT);
// Setup a timer to drive the state machine.
Timer1.initialize(kTimerPeriodUSec);
Timer1.attachInterrupt(TimerInterruptHandler);
// Attach an interrupt handler to each SONAR's echo pin.
attachInterrupt(kPinEcho0, Echo0InterruptHandler, CHANGE);
attachInterrupt(kPinEcho1, Echo1InterruptHandler, CHANGE);
attachInterrupt(kPinEcho2, Echo2InterruptHandler, CHANGE);
attachInterrupt(kPinEcho3, Echo3InterruptHandler, CHANGE);
}
void TSonar::TimerInterruptHandler() {
// The states of the state machine.
typedef enum {
COUNTDOWN, // Count down timer ticks.
PULSE_HIGH, // Begin transmitting the SONAR pulse.
PULSE_LOW // Stop transmitting the SONAR pulse and begin listening for the
// echo.
} TTimerState;
// The current state of the state machine.
static volatile TTimerState state = COUNTDOWN;
// Used to count timer pulses to drive the timing of each SONAR sensor.
static volatile long countdown = kTimerCountsPerSamplingPeriod;
if (--countdown == 0) {
// Time to start the next SONAR sensor in the list of sensors.
state = PULSE_HIGH;
countdown = kTimerCountsPerSamplingPeriod;
}
switch (state) {
case COUNTDOWN:
// Continue counting so that the sensors are spaced
// apart in time so they don't interfere with each other.
break;
case PULSE_HIGH:
// Time to send out the ping for the next SONAR in the list.
if ((g_next_sensor_index_ % 4) == 0) {
digitalWrite(kPinTrigger0, HIGH);
} else if ((g_next_sensor_index_ % 4) == 1) {
digitalWrite(kPinTrigger1, HIGH);
} else if ((g_next_sensor_index_ % 4) == 2) {
digitalWrite(kPinTrigger2, HIGH);
} else {
digitalWrite(kPinTrigger3, HIGH);
}
state = PULSE_LOW;
break;
case PULSE_LOW:
// Time to stop the ping output and begin listening for the
// echo for the next SONAR in the list.
if ((g_next_sensor_index_ % 4) == 0) {
digitalWrite(kPinTrigger0, LOW);
} else if ((g_next_sensor_index_ % 4) == 1) {
digitalWrite(kPinTrigger1, LOW);
} else if ((g_next_sensor_index_ % 4) == 2) {
digitalWrite(kPinTrigger2, LOW);
} else {
digitalWrite(kPinTrigger3, LOW);
}
g_next_sensor_index_++; // Control the next sensor in the list the next
// time around.
state = COUNTDOWN; // Begin counting down again.
break;
}
}
TSonar::TSonar() : TModule(TModule::kSonar) {
for (size_t i = 0; i < kNumberSonars; i++) {
// Initialize the array of readings used to compute the average distance
// per sensor.
for (size_t j = 0; j < kNumberReadingsToAverage; j++) {
g_values_mm_history_[i][j] = 0;
}
}
}
TSonar& TSonar::singleton() {
// A singleton pattern is used to support the TModule design pattern.
if (!g_singleton_) {
g_singleton_ = new TSonar();
}
return *g_singleton_;
}
uint8_t TSonar::g_next_sensor_index_ =
0; // Which sensor in the list is being controlled now.
TSonar* TSonar::g_singleton_ =
nullptr; // Used to ensure that only one instance of this class is ever
// constructed.
int TSonar::g_values_mm_[TSonar::kNumberSonars] = {
-1, -1, -1, -1}; // The last reading per sensor.
int TSonar::g_values_mm_history_
[TSonar::kNumberSonars]
[TSonar::kNumberReadingsToAverage]; // List of readings per sensor used to
// compute an average per sensor.
float TSonar::g_average_value_m_[TSonar::kNumberSonars] = {
0, 0, 0, 0}; // The computed average distance per sensor.
const float TSonar::g_time_to_mm_scaler_ =
(10.0 / 2.0) / 29.1; // For converting time to distance.
Don’t forget to include a footprint in your Navigation 2 (Nav2) configuration file so that the local and global cost maps can properly account for the outline of the robot when it is attempting to avoid collisions. You need a footprint for both the local and global cost maps. My cost map parameters are stored in a file called “nav2_params.yaml“, and within it there is a section for “local_costmap” and one for “global_costmap“. Here is a snippet from the configuration file showing the footprint line for the local cost map for my robot.
local_costmap:
local_costmap:
ros__parameters:
footprint: "[[-0.450, -0.215], [-0.450, 0.215], [0.235, 0.215], [0.235, -0.215]]"
And there is a similar “footprint” line in the global_costmap section.
The footprint value must be enclosed in quotes. It has an outer pair of square brackets containing a list of [x, y] coordinate pairs, with each pair enclosed in square brackets and the bracketed pairs separated by commas. Each x and y coordinate is a point in the base_link coordinate frame of the robot.
You don’t need to add a coordinate pair at the end to close the polygon back to the first point; the closing segment is assumed. So a line segment from the last pair [0.235, -0.215] to the first pair [-0.450, -0.215] is assumed in the above example.
If you’d rather, you can declare a circular outline for your robot instead, using, e.g.,
robot_radius: 0.25 # The outline is a circle with radius 0.25 meters
You don’t have to use the same footprint list or radius for both the local and global cost maps. You might, for instance, use a footprint list for the local cost map but a radius for the global cost map. The article referenced at the end of this post has a discussion about this.
Here is what the above footprint outline looks like in rviz2 when visualizing my robot (which is also showing LIDAR, SONAR and time-of-flight sensors and the local cost map). The lime-green square box in the center is the footprint.
See Setting Up The Robot’s Footprint for more information.