2023 03 25 Raven Status Update

A fair amount of progress has been made on Raven over the last few days.

I have a list of TODO items that need to be tackled. One of those items had been on the list for months, and this week it bubbled to the top: my proximity sensor values were being timestamped with stale times.

In ROS (Robot Operating System), the whole software stack that tries to generate commands to move the robot somewhere interesting relies on a flood of data coming in, like SONAR readings, time of flight readings and LIDAR readings. Each of those readings comes with a timestamp indicating when the reading was taken.

On my robot, the readings are taken on a custom PC board I designed, and then sent to the robot's main computer. The custom board needs to have the same sense of "now" as the main PC: the two computers have to agree within a few milliseconds on what time it is. It doesn't do any good to take a SONAR reading and then have it show up a half second late at the main computer. The algorithms all have a small tolerance for time skew, and if sensor readings are too old, they are rejected out of hand as useless.

The software piece I use to transmit my sensor values from the custom board to the main computer is a package called "Micro ROS". I use this software with some small customization, as my custom board has a lot more capability than the usual boards Micro ROS is intended for.

One function built into Micro ROS is the ability to ask another computer what time it is and set its own clock to match. But setting it just once doesn't work. Each computer has a clock driven by a crystal-controlled oscillator, and crystals drift as they heat up. Worse, much worse, the CPU clock chip on my custom board seems to "burp" now and then when interrupt signals come in, and my hardware generates a fair number of interrupt signals.

Another problem is that Micro ROS has baked into it an expectation that the main computer is using a particular version of communication software, and the expected version currently has bugs which cause my stereo cameras to not operate correctly. It took a bit of reading for me to realize that little factoid.

For the moment I can ignore that, so I set my main computer back to using the buggy communication software. Also, when Micro ROS asks for the current time, it takes quite a while to get the result, usually about 150 milliseconds but sometimes as much as a second. So any answer it gets from the main PC is already stale by the time it arrives.

My last few days of programming have been devoted to finding some way to make all of that work within an allowable tolerance for timing error. I tried over and over, and I'm normally very good at finding creative solutions to tricky problems, yet my code got progressively worse the more I tried to fix it. And then my robot buddy Ralph Hipps called for one of our at-least-daily robot chats, and in the process of explaining the problem to him, the root cause occurred to me: my custom board was attempting to do a complex operation during interrupt handling.

Interrupt handlers on a computer must be very quick. If your interrupt handler code takes more than about a millisecond, sometimes even only a few tens of microseconds, everything falls apart. And because I was explaining the symptoms to someone else, I finally realized that I was taking probably tens of milliseconds in my interrupt handler for the SONAR sensors.

Once I realized that, the fix was easy. The details aren't too important, but I stashed the important information from the signal in a shared, global location and exited the interrupt handler. Then the normal, non-interrupt code picked up the stashed information for processing. Outside of the interrupt handler, you can largely take whatever time you want to process data. Sort of. ROS navigation depends heavily on high "frame rates" from sensors. If you can't generate, say, LIDAR data 10 times a second or faster, you have to slow your robot way down; otherwise the robot could move faster than it can react to, say, the cat rearing up in front of your robot with nunchucks in hand.

The robot is now happily sending sensor data nearly three times faster than before, about 80 frames a second, and the timestamps rarely drift out of sync by much. Now I can move on to the new problems that showed up because I fixed this one.

Below is an obligatory snapshot of part of the visualization software showing my robot in a room, with LIDAR, SONAR and time of flight sensor readings shown as well. No cat is in this picture.

Everything about robots is hard (TM)

2021 10 12 Motor Mount Problems

Everything about robots is hard. For example.

Puck motor twists away from frame

I have been doing experiments with Puck while it was sitting on top of a desk. To help prevent it from accidentally rolling about, I enabled the wheel locks for the two caster wheels in back. I was ready to try an experiment with an obstacle avoidance block of code I have been developing, so I put Puck on the floor in a hallway and brought up the software stack.

As is often the case, I also brought up a keyboard teleoperation node which allows me to drive the robot manually and remotely. I have a data visualizer on my desktop computer (rviz2) that lets me look at the LIDAR scans, the proximity sensor data, any of the camera videos, a mockup of the physical robot placed in the house, and so on, and the robot can be far away from me.

As I began to move the robot, it didn’t seem to be steering as I expected. It kept pulling to the right and I heard some unusual sounds coming from the hallway. But my mind was on the new feedback in the display which was showing me an idea of potential hazards along the way along with a recommendation of how to avoid the hazards. It was all fun until it wasn’t. Eventually there was a grinding sound coming from the hallway.

I went out to look and the picture shows what I saw. The right wheel and motor were twisted around and the wheel was grinding against the frame of the robot. I could barely imagine how this was even possible. My first thought was that there was a collision and the four bolts holding the motor to the frame had sheared off.

I put the robot back on the desktop and took off the top plate with the control panel and LIDAR, then the middle plate with the two computers and the camera array, and then I could look at the motor mounts. What had actually happened was that three of the four screws holding the motor had come out of their holes. How is that possible, you may ask?

The screws are 6-32 screws. After the initial robot construction, I added a spacer between the motor and the chassis, as the rear of the motor body is slightly wider than the front of the motor body and the right-angle gearbox; the screws thread into that gearbox. The spacers give room for the motor to lie completely parallel with the plane of the frame. When I added those spacers (just washers), it made it so the screws no longer reached completely through the gearbox housing. In fact, and I didn't measure this when I did it, the screws barely screwed into the motor housing at all. And the torque on the wheel-coupler-gearbox assembly from one locked caster wheel was enough to twist the motor so hard that three of those screws popped out.

The fix was easy enough: I replaced the screws with longer ones. But I still don't have, for example, a sensor to detect whether the caster wheels are locked before driving; that remains an open human-error problem for the robot. I could remove the wheel locks, but I don't have a specialized frame to hold the robot off the floor for experiments where I want the motors to turn without the robot actually going anywhere. Without that frame, the wheel locks are my safety feature to keep the robot from rolling off the desk.

Eventually, everything goes wrong.

The Motivation

Source: https://ourworldindata.org/population-aged-65-outnumber-children

Worldwide, the number of elderly people (65 years and older) surpassed the number of children under 5 years of age in 2019. It is likely, in my lifetime, that there will be more people needing assistance in their lives than there will be people who can provide it. I don't expect to be able to afford especially extraordinary care, nor do I expect not to need it as I age. My alternative is to create the needed technology so it will be there when I need it.

Wimble Robotics is a one-person effort to create an assistive robot for my personal use.

I understand that this will be hard. In fact, having spent several years on the effort so far, I know that everything about robots is hard. This is a theme you will see me return to again and again.

For years I have been creating various experimental robots, from tiny robots the size of your hand to robots the size of an electric wheelchair. My expectation was not that I would quickly reach even that dreamed-of major goal of the home roboticist: fetching a drink from the refrigerator and bringing it to you. Rather, so much technology is required for a robot to be useful in even a tiny way. This is the hardest learning journey of my life, so far.

Creating a robot that is trustworthy (it won’t harm me or my possessions), reliable (it will work when I turn it on) and predictable (it will do the tasks as designed, repeatedly as needed) is a tall order, indeed.

As I’ve often said in my talks to various groups about robotics, just because you tell a motor controller to move the motors, doesn’t mean that the motors actually move. If the motors move, it doesn’t mean that the attached wheels turn. If the wheels turn, it doesn’t mean that the robot will move at all. If the robot moves, it doesn’t mean that it will move as commanded.

Everything about robots is hard. Batteries fail. Electronics fail. Screws loosen or fall out. Wires break from vibration. Networks stop communicating. Wheels get slippery. Robots are a breeding ground for failure.

Every few months, I create a new robot which improves on the predecessor because of what I learned up to that point. Sometimes I want to explore one new aspect, like reliable sensing of the position of the robot over time, relative to some time in the past. Sometimes I want to try out a new bit of technology, like stereo depth-sensing cameras, or time of flight distance sensing. Sometimes I work on just one dimension of the overall problem, such as fallback technology for when the network fails, or reliable startup of the robot. Progress is made with each generation. But the progress is slow. Especially for a single-person effort made by a retired computer scientist.

This blog is a somewhat free-form brain dump of the journey to make a personal, assistive robot, to be ready when I need it.