Non-holonomic modeling of mobile robots

Holonomic vs. non-holonomic

In robotics, the terms “holonomic” and “non-holonomic” are used to classify the motion constraints of a robot or a system.

Holonomic constraints

Holonomic constraints are those that depend only on the position and orientation of the system, not on its velocity or acceleration. A robot is commonly called holonomic when its number of controllable degrees of freedom equals its total degrees of freedom. Holonomic systems can move in any direction in their configuration space.

For example, a robot with mecanum wheels or an omni wheel design can move directly forward, backward, laterally left, and laterally right, as well as rotate in place. This is because the design of these wheels allows for motion in multiple directions without changing the robot’s orientation.

Non-holonomic constraints

Non-holonomic constraints are those that involve the velocity of the system, not just its position and orientation. A robot is commonly called non-holonomic when its number of controllable degrees of freedom is less than its total degrees of freedom. Non-holonomic systems are limited in their movement by these constraints.

A common example of a non-holonomic system is a car or a differential drive robot. A car, for instance, cannot move laterally because the orientation of its wheels only allows for forward or backward motion.

In summary, whether a robot is classified as holonomic or non-holonomic depends on its degrees of freedom and the constraints imposed on its motion by its design and control mechanisms. It’s crucial to consider these constraints when designing a robot’s control system.

Modelling

The non-holonomic model for mobile robots is a mathematical framework that represents the constraints imposed on a robot’s movement due to its mechanical design. Non-holonomic constraints are velocity constraints that limit the robot’s ability to move in certain directions. For example, a car cannot move directly sideways due to its wheels’ orientation, presenting a non-holonomic constraint.

A typical example of a non-holonomic system is a differential drive robot. A differential drive robot has two independently driven wheels, and it moves by changing the relative speeds of these wheels.

The robot’s movement is typically represented using the configuration vector, which contains the robot’s position and orientation:

Q = [x, y, θ]ᵀ

The non-holonomic constraints for a differential drive robot can be represented by the following model:

dx/dt = v · cos(θ)
dy/dt = v · sin(θ)
dθ/dt = ω

where:

  • v is the linear velocity
  • ω is the angular velocity
  • θ is the robot’s orientation

This set of equations is a representation of the non-holonomic model, describing the robot’s change in position (dx/dt, dy/dt) and orientation (dθ/dt) over time as a function of its linear velocity v and angular velocity ω.
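To make this concrete, here is a minimal sketch (in JavaScript, purely for illustration) that integrates these equations with a forward Euler step; the time step and velocity commands are made-up values.

// Forward Euler integration of the differential-drive kinematic model above.
// The commands v, omega and the time step dt are illustrative values only.
function step(pose, v, omega, dt) {
  return {
    x: pose.x + v * Math.cos(pose.theta) * dt,
    y: pose.y + v * Math.sin(pose.theta) * dt,
    theta: pose.theta + omega * dt,
  };
}

// Drive along a gentle arc for 5 seconds starting from the origin.
let pose = { x: 0, y: 0, theta: 0 };
for (let t = 0; t < 5; t += 0.01) {
  pose = step(pose, 0.5 /* m/s */, 0.2 /* rad/s */, 0.01 /* s */);
}
console.log(pose); // final position and heading

Note that there is no way to command a lateral velocity here: any sideways displacement has to be produced indirectly by combining forward motion and rotation, which is exactly the non-holonomic constraint.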

In conclusion, non-holonomic modeling is an essential aspect of mobile robotics that allows us to accurately predict and control a robot’s movement despite the constraints imposed by its mechanical design. With an appropriate understanding of these constraints, we can design control systems that effectively navigate the robot in complex environments.



Non-holonomic constraints are basically just all other cases: when the constraints cannot be written as an equation between coordinates (but often as an inequality).

An example of a system with non-holonomic constraints is a particle trapped in a spherical shell. In three spatial dimensions, the particle then has 3 degrees of freedom. The constraint says that the distance of the particle from the center of the sphere is always less than R: √(x² + y² + z²) < R. We cannot rewrite this as an equality, so this is a non-holonomic, scleronomous constraint.

Some further reading:

http://galileoandeinstein.physics.virginia.edu/7010/CM_29_Rolling_Sphere.pdf

Ackermann steering

Ackermann steering, which is the type of steering geometry used in many automobiles, introduces a non-holonomic constraint.

In a system with Ackermann steering, the vehicle can change its position in the forward and backward direction and can change its orientation by turning its wheels and moving forward or backward, but it cannot move laterally (i.e., to the side without turning). Thus, the motion of the vehicle is constrained to be along the direction that the wheels are steering, and this constraint is velocity-dependent, not just position-dependent.

Therefore, a vehicle with Ackermann steering does not have independent control over all of its degrees of freedom; it is an example of a non-holonomic system. The vehicle has to follow a specific path to reach a desired position and orientation; it cannot simply move there in a straight line (unless the desired position happens to be directly ahead or behind along the vehicle’s current heading).
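As an illustration, here is a minimal sketch of the kinematic bicycle model, a common approximation of an Ackermann-steered vehicle (in JavaScript, purely for illustration; the wheelbase and the commands are made-up values):

// Kinematic bicycle model: the heading changes only while the vehicle moves,
// at a rate set by the steering angle and the wheelbase. There is no input
// that produces sideways motion, which is the non-holonomic constraint.
function bicycleStep(state, v, steeringAngle, dt, wheelbase) {
  return {
    x: state.x + v * Math.cos(state.theta) * dt,
    y: state.y + v * Math.sin(state.theta) * dt,
    theta: state.theta + (v / wheelbase) * Math.tan(steeringAngle) * dt,
  };
}

let car = { x: 0, y: 0, theta: 0 };
for (let t = 0; t < 3; t += 0.01) {
  car = bicycleStep(car, 2.0, 0.3, 0.01, 2.5); // constant speed, constant steering
}
console.log(car); // the car has swept an arc rather than translating sideways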

Before you’re hired: as a JavaScript developer


Binary tree

A binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a (non-empty) binary tree is a triple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set. Some authors allow the binary tree to be the empty set as well.
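A minimal sketch of what that definition looks like in code (the node values are arbitrary):

// A binary tree node: a value plus at most two children.
class Node {
  constructor(value, left = null, right = null) {
    this.value = value;
    this.left = left;   // left subtree, or null for the empty tree
    this.right = right; // right subtree, or null for the empty tree
  }
}

// In-order traversal: left subtree, node, right subtree.
function inOrder(node, visit) {
  if (node === null) return;
  inOrder(node.left, visit);
  visit(node.value);
  inOrder(node.right, visit);
}

const root = new Node(2, new Node(1), new Node(3));
inOrder(root, (v) => console.log(v)); // prints 1, 2, 3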

Currying

Currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument. – Wikipedia
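A minimal sketch of what that looks like in JavaScript:

// A three-argument function and its curried equivalent.
const add = (a, b, c) => a + b + c;
const addCurried = (a) => (b) => (c) => a + b + c;

console.log(add(1, 2, 3));        // 6
console.log(addCurried(1)(2)(3)); // 6

// Partial application falls out of the curried form for free.
const addTen = addCurried(10);
console.log(addTen(4)(5));        // 19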

Higher-order function

A higher-order function is a function that does at least one of the following: takes one or more functions as arguments (i.e. procedural parameters), or returns a function as its result. All other functions are first-order functions. In mathematics, higher-order functions are also termed operators or functionals.
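A minimal sketch: Array.prototype.map takes a function as an argument, and multiplier returns one, so both are higher-order functions.

// multiplier returns a function; map takes a function as an argument.
const multiplier = (factor) => (x) => x * factor;

const doubled = [1, 2, 3].map(multiplier(2));
console.log(doubled); // [2, 4, 6]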

Event loop

The event loop got its name because of how it’s usually implemented, which usually resembles:

while (queue.waitForMessage()) {
  queue.processNextMessage();
}

queue.waitForMessage() waits synchronously for a message to arrive if there is none currently.

A very interesting property of the event loop model is that JavaScript, unlike a lot of other languages, never blocks. Handling I/O is typically performed via events and callbacks, so when the application is waiting for an IndexedDB query to return or an XHR request to return, it can still process other things like user input.
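A minimal sketch of that non-blocking behaviour: the callback is queued as a message and handled on a later turn of the event loop, after the currently running code has finished.

console.log('request sent');
setTimeout(() => console.log('response handled'), 0); // queued for a later turn
console.log('still responsive to other work');
// Output order: request sent, still responsive to other work, response handled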

Prototype

When a function is created in JavaScript, the JavaScript engine adds a prototype property to the function. This prototype property is an object (called a prototype object) that has a constructor property by default. The constructor property points back to the function on which the prototype object is a property. We can access the function’s prototype property using the syntax functionName.prototype.
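A minimal sketch (the function name is arbitrary):

function Robot(name) {
  this.name = name;
}

// The engine created Robot.prototype automatically, with a constructor
// property that points back at Robot.
console.log(typeof Robot.prototype);                // 'object'
console.log(Robot.prototype.constructor === Robot); // true

// Methods placed on the prototype are shared by every instance.
Robot.prototype.greet = function () {
  return 'Hi, I am ' + this.name;
};
console.log(new Robot('R2').greet()); // 'Hi, I am R2'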

Encapsulation

Encapsulation refers to enclosing all the functionalities of an object within that object so that the object’s internal workings (its methods and properties) are hidden from the rest of the application. This allows us to abstract away or localize a specific set of functionalities on an object.
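A minimal sketch using a closure to hide internal state (the names are arbitrary):

function createAccount(initialBalance) {
  let balance = initialBalance; // hidden internal state
  return {
    deposit(amount) { balance += amount; },
    getBalance() { return balance; },
  };
}

const account = createAccount(100);
account.deposit(50);
console.log(account.getBalance()); // 150
console.log(account.balance);      // undefined - the balance is not exposed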

Can you draw an algorithm?

Realize the difference: Impostor Syndrome and the Dunning–Kruger Effect

The Dunning–Kruger effect is a cognitive bias in which people of low ability have illusory superiority and mistakenly assess their cognitive ability as greater than it is. The cognitive bias of illusory superiority comes from the inability of low-ability people to recognize their lack of ability. Without the self-awareness of metacognition, low-ability people cannot objectively evaluate their competence or incompetence. – Wikipedia

Impostor syndrome (also known as impostor phenomenon, impostorism, fraud syndrome or the impostor experience) is a psychological pattern in which an individual doubts their accomplishments and has a persistent internalized fear of being exposed as a “fraud”. – Wikipedia

https://twitter.com/nat_sharpe_/status/1277353559756070912?s=21

Sharing Wi-Fi via the Ethernet port on Ubuntu

Your computer (wComputer) is connected to a Wi-Fi network, and you want to connect another computer to that network to share the internet connection, but this computer (lComputer) doesn’t have a Wi-Fi adapter.

What it does have is an Ethernet port, and therefore the wComputer with the Wi-Fi connection can share that network over its Ethernet port with the lComputer.

I am using Ubuntu on the wComputer and Windows on the lComputer.

  • Type nm-connection-editor in your terminal.
  • Add a shared network connection by pressing the Add button.
  • Choose Ethernet from the list and press Create.
    (screenshot: create a new Ethernet connection)
  • Click IPv4 Settings on the left.
  • Choose Shared to other computers from the Method drop-down menu.
  • Enter a new name like wifishare as the Connection name at the top.
    (screenshot: rename it to wifishare (optional) and select Shared to other computers in IPv4)

Now I connected the lComputer to the wComputer using a cable and rebooted the wComputer; I think restarting the networking service should also do the magic.

The lComputer will get an IPv4 address from the wComputer, 10.42.0.24 in my case, which is probably the default range on Ubuntu.


Sending and receiving messages between the onboard Raspberry Pi BLE and an Arduino with HM 11

Goal:

Send messages between the HM 11 BLE module and my Raspberry Pi 3 with its onboard BLE.

How:

BlueZ comes preinstalled with the Raspbian distro.

Wiring is basic UART for connecting the HM 11 to the Arduino. In my case, I am using a Teensy 3.6 programmed using the Arduino IDE.

Use this serial proxy sketch to get started; if you have the Arduino IDE, open this file. The Teensy serial pins are 0 and 1. I kept it as software serial so it’s easy to adapt for any other board by just changing the pins used for the UART.

That’s all you need to set up on the Arduino side. Everything should work now.

Maybe try a few AT commands. AT should give you OK if the module is being used for the first time; if not, OK+LOST, which means the module had a previous connection that was lost. Either way, I would try these three AT commands just to be on the safe side: AT+RENEW, AT+RESET, AT.

If you make a pull request adding a newline after every response from the HM 11 module, that would be great.

The next step is to set up the Raspberry Pi; everything should be available out of the box after setting up an RPi with a fresh SD card. Be sure that you are using a Raspberry Pi 3 or newer, which has onboard BLE.

With Commandline:

With Python:

You should know the HM 11 MAC address. You will know it if you did the setup through the command line; if not, you can send AT+ADDR? and add “:” after every two characters.
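For example, a quick snippet (in JavaScript here, just for illustration; the address is a made-up example) to insert the colons:

// Turn the raw address returned by AT+ADDR? into the colon-separated form.
const raw = 'A4C13812F2C9';               // made-up example address
const mac = raw.match(/.{2}/g).join(':');
console.log(mac);                         // 'A4:C1:38:12:F2:C9'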

Install the system dependencies, go into the directory, and install the Python dependencies. Before starting the Python program, change the address of the device you want to connect to. After you have done that, start the program; it should send “Hello world” once to the device and then wait for any messages. Whatever you enter in the serial monitor will be printed out by the Python program.

# sudo apt-get install build-essential python3-pip git
# git clone https://github.com/akrv/BLEserialbridge
# cd BLEserialbridge
# pip3 install -r requirements.txt
# python3 serial_bridge.py
Below are the AT commands I sent while setting up the module.


Clock synchronisation in communication systems

Motive

This is a guide to implementing a clock synchronisation algorithm on low-power wireless sensor networks.

The following are some pointers and papers to read before wandering into the wild to get things done, to avoid reinventing the wheel.

Literature survey

  • Message Time-stamping in Sensor Networks
  • Flooding time synchronisation protocol [1]
    • this protocol is very good in terms of single-hop time synchronisation
    • it uses linear regression, so get ready to do some computation on your MCU
    • very good numbers and explanation in those slides [2]
    • Global Clock Skew
  • for multi-hop networks, this one [3] is a good paper to read
  • Temperature Compensated Time Sync [4]
    • The only requirement on that time synchronization protocol is that it can calculate the current frequency error with respect to the reference node’s clock. Unlike other time synchronization protocols, TCTS also records the current temperature during a synchronization exchange. Both the temperature and frequency error at the end of the synchronization exchange is cached in a frequency vs. temperature table in memory. [5]
    • Before attempting subsequent resynchronization, TCTS will measure the current temperature and consults its internal calibration table. If the current frequency error for the measured temperature is cached, TCTS will not attempt to resynchronize with the reference node since a new time estimate is not required. Eventually, when all of the operating temperatures have been observed, TCTS will have auto-calibrated the low-stability clock, essentially providing a TCXO timebase. [5]
    • In addition, since we know that the curve should fit a quadratic function of the form

      fe(T) = −A · (T − T0)² + B,

      where A is the temperature coefficient, T0 = 20 °C represents room temperature, and B is a frequency error offset, we can fit the measured calibration points to this curve and obtain frequency error estimates for previously unobserved temperatures, thus drastically improving the range and accuracy while eliminating a factory calibration step and allowing on-demand calibration. [5] (A small sketch of this lookup-and-fit idea follows this list.)

  • Virtual High-Resolution Time (VHT) [5]
    • basic idea behind VHT
        • During active periods, the high-frequency clock is turned on, and a hardware counter counts the number of high-frequency clock ticks that occur during each low-frequency clock interval, i.e., there are

          φ0 = fH / fL

          high-frequency clock ticks during each low-frequency one.

        • At the time of the event, the system records not only the value of the counter sourced by the low-frequency clock but also the value of the counter sourced by the high-frequency clock which was reset at the end of the most recent clock tick of the low-frequency clock. Thus, this second timer will indicate the phase φ within a low-frequency clock tick, allowing an effective resolution to be up-sampled to the high-frequency clock (modulo one cycle of jitter). The event time is sampled as

          tevent = CL · φ0 + φ.

        • While the high-frequency clock is not phase-locked with the low-frequency clock, the phase error is limited to < 1/fH, and thus is of the same order as the quantization error.
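As promised above, here is a minimal sketch of the TCTS-style lookup (in JavaScript, purely for illustration): consult the cached calibration table for the current temperature, and fall back to the fitted quadratic model otherwise. The coefficients A and B and the table entries are made-up values, not numbers from the paper.

// TCTS-style frequency-error lookup with a quadratic fallback.
// A, B, T0 and the calibration table below are illustrative values only.
const A = 0.034;  // ppm / °C² (made up)
const B = 0;      // ppm offset (made up)
const T0 = 20;    // room temperature, °C

const calibrationTable = new Map([[20, 0], [25, -0.85]]); // °C -> ppm (made up)

function frequencyError(tempC) {
  if (calibrationTable.has(tempC)) {
    return calibrationTable.get(tempC); // cached: no resynchronisation needed
  }
  return -A * (tempC - T0) ** 2 + B;    // fe(T) = −A·(T − T0)² + B
}

console.log(frequencyError(25)); // comes from the cached table
console.log(frequencyError(35)); // estimated from the quadratic fit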

Implementation of VHT in microcontrollers

The requirements are quite simple, and the target chipset used in this project meets them with two additional timers:

  1. 2 clocks driven by two oscillators
  2. 2 timers with capture and compare modes.

The capture mode retrieves a timer value when a signal event occurs.

The compare mode constantly monitors a timer counter value and compares it to a value set by the application. When they match, it triggers an event.

The target implementation is on the CC1350 chipset [6], which has four general-purpose timer modules (eight 16-bit or four 32-bit timers). With the newest hardware revision, the CC1352R, almost the same chip can be used to run Bluetooth 5.0; it has an ARM Cortex-M4 instead of a Cortex-M3 processor. This upgrade, along with the increase in memory footprint, can be attributed to the new Bluetooth 5.0 standard.

The application framework used will be Contiki-OS [7]. I will use the latest version in development, which also supports the SensorTag board and the LaunchPad [8]. The toolchain setup and compilation of Contiki-OS are covered in this post.

(Figure: Illustration of the Virtual High-Resolution Time event time-stamping mechanism on a microcontroller [5].)

The two capture units are connected to the event. The paper expects the event to be the interrupt for the start-of-frame delimiter (SFD), which is justified earlier (Figure 2 in [5]) as giving better resolution for clock synchronisation.

An additional capture unit on the fast timer captures the counter on every low-frequency rising edge; this is the Sync event, where the value h0 is stored in one of the capture units. Later, when the SFD line rises, the two capture units on the two timers store l0 (the value of the low-frequency counter) and h1 (the value of the high-frequency counter), respectively. Using these three captured values, the event time can be calculated as

tevent = l0 · φ0 + (h1 − h0) mod φ0.

Counter overflows are an issue to be tackled. Overflows of the high-frequency counter are not a problem, since it runs only in short bursts while the system is synchronising or listening for a packet. The low-frequency counter, however, is used for long intervals, so it must be 64 bits wide; a 32-bit counter would overflow within about 5 minutes.

As per the suggestion in [5], a 32 kHz and an 8 MHz clock are used for the timers. The MCU has a clock speed of up to 48 MHz.
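To make the arithmetic concrete, here is a minimal sketch of the event-time calculation with these two clocks (in JavaScript, purely for illustration; on the real hardware this would live in the capture-interrupt handler, and the captured counter values below are made up):

// VHT event-time calculation for a nominal 32 kHz / 8 MHz clock pair.
const fL = 32000;            // low-frequency clock, nominal (Hz)
const fH = 8000000;          // high-frequency clock (Hz)
const phi0 = fH / fL;        // fast ticks per slow tick = 250

// Values the capture units would hold (made-up example).
const h0 = 120;              // fast counter at the last low-frequency rising edge
const l0 = 51234;            // slow counter captured when the SFD line rises
const h1 = 187;              // fast counter captured when the SFD line rises

// Event time in high-frequency ticks: tevent = l0·φ0 + (h1 − h0) mod φ0.
const tEvent = l0 * phi0 + ((h1 - h0) % phi0);
console.log(tEvent);         // 12808567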

References:

[1] http://www.isis.vanderbilt.edu/sites/default/files/Maroti_M_11_3_2004_The_Floodi.pdf

[2] https://www.tik.ee.ethz.ch/file/5562ed988ca095af8331114dd81c6c70/sensys09_slides.pdf

[3] https://people.mpi-inf.mpg.de/~clenzen/pubs/LSW09optimal.pdf

[4] https://ieeexplore.ieee.org/document/5170185/

[5] https://web.eecs.umich.edu/~prabal/pubs/papers/schmid10vht.pdf

[6] http://www.ti.com/lit/ds/symlink/cc1350.pdf

[7] http://contiki-ng.org/

[8] https://github.com/contiki-ng/contiki-ng/wiki/Platform-srf06-cc26xx


Decentralized and distributed systems

Pointers for distributed vs decentralised:

Some bookmarks to read if you are a distributed systems researcher [1]

  • A decentralized system is a subset of a distributed system.
  • The primary difference is how/where the “decision” is made and how the information is shared throughout the control nodes in the system.
  • Decentralized means that there is no single point where the decision is made.
  • Every node makes a decision for its own behaviour and the resulting system behaviour is the aggregate response.
  • A key characteristic of decentralized systems is that typically no single node will have complete system information.
  • “Distributed means that the processing is shared across multiple nodes, but the decisions may still be centralized and use complete system knowledge,” says Coinbase in their blog post.
  • A scenario to think of:
    • Some control algorithms I’ve seen for multiple quad-copter control are purely distributed, in that a central over-seer gives optimization problems to each copter to solve and then return the solution.
    • The over-seer will then issue commands based on the aggregate result.
  • Here’s a philosophical question:
    • if the over-seer is “voted” into office by the nodes, is it decentralized or centralized?
    • I’d have to say decentralized, but it is arguable, says MaRi Eagar
    • then I tend to ask: what is a distributed system?

Keywords that matter

when you start the debate (centralized systems are included here): [2]

  • Points of Failure / Maintenance
  • Fault Tolerance / Stability
  • Scalability / Max Population
  • Ease of development / Creation
  • Evolution / Diversity

References:

  1. What is the difference between decentralized and distributed systems?
  2. Centralized vs Decentralized vs Distributed

Compiling Contiki for CC1350

Setup:

  • macOS 10.13.5
  • VM – VirtualBox
  • Guest OS: Ubuntu 16.04

Setup Contiki on the local machine:

Remove any old toolchain packages, add the GCC ARM Embedded PPA, and install the toolchain. You might need sudo rights.

# sudo apt-get remove gcc-arm-none-eabi gdb-arm-none-eabi binutils-arm-none-eabi
# sudo add-apt-repository ppa:team-gcc-arm-embedded/ppa
# sudo apt-get update
# sudo apt-get install gcc-arm-embedded

Compile Contiki hello-world for Native:

Native means that your code will compile and run directly on your host PC.
For me, it will run on the Ubuntu 16.04 OS in the VM.

cd into your development folder.

# git clone https://github.com/contiki-os/contiki
# cd contiki/examples/hello-world
# make TARGET=native
mkdir obj_native
  CC        ../../core/ctk/ctk-conio.c
  CC        ../../platform/native/./contiki-main.c
  CC        ../../platform/native/./clock.c
  CC        ../../core/dev/leds.c
### output truncated ###
  AR        contiki-native.a
  CC        hello-world.c
  LD        hello-world.native
rm hello-world.co

The TARGET variable tells the build system to compile for the current system.
This is what is later changed for cross-compiling to our target platform.

Type the following command to run the hello-world program.

# ./hello-world.native
Contiki-3.x-3343-gbc2e445 started with IPV6, RPL
Rime started with address 1.2.3.4.5.6.7.8
MAC nullmac RDC nullrdc NETWORK sicslowpan
Tentative link-local IPv6 address fe80:0000:0000:0000:0302:0304:0506:0708
Hello, world

If you see output like this, your compilation worked!

Compiling Contiki-OS for CC13xx

The only thing that needs to change is the target; make clean is an important step.

# make clean
# make TARGET=srf06-cc26xx BOARD=sensortag/cc1350

The compilation works if you see output like the following:

CC ../../cpu/cc26xx-cc13xx/lib/cc13xxware/startup_files/ccfg.c
CC ../../cpu/cc26xx-cc13xx/./ieee-addr.c
AR contiki-srf06-cc26xx.a
CC ../../cpu/cc26xx-cc13xx/./fault-handlers.c
CC ../../cpu/cc26xx-cc13xx/lib/cc13xxware/startup_files/startup_gcc.c
CC hello-world.c
LD hello-world.elf
arm-none-eabi-objcopy -O ihex hello-world.elf hello-world.i16hex
srec_cat hello-world.i16hex -intel -o hello-world.hex -intel
arm-none-eabi-objcopy -O binary --gap-fill 0xff hello-world.elf hello-world.bin
cp hello-world.elf hello-world.srf06-cc26xx
rm hello-world.i16hex hello-world.co obj_srf06-cc26xx/fault-handlers.o obj_srf06-cc26xx/startup_gcc.o

The following error means that the installation of the compiler for cross-compilation wasn’t successful:

/bin/sh: 1: arm-none-eabi-gcc: not found

The next step would be to compile with Contiki-NG, since active development is happening there, and to base all further development on that repo.

# git clone https://github.com/contiki-ng/contiki-ng
# cd contiki-ng
# git submodule update --init --recursive

Navigate to the hello-world example and run the following:

# make clean
# make TARGET=srf06-cc26xx BOARD=sensortag/cc1350


SensyLight: a sensible atmosphere using the Internet of Things

https://vimeo.com/192196471

The above video is from a research lab at the MIT Media Lab called Responsive Environments. They have a really interesting article [1] about a multimodal mediated work environment.

The Internet of Things has been a great buzz these days. It is interesting, but why is it interesting? I just made a home-lighting project around this idea.

So, here is the scenario for this Internet of Things project. The thing in the Internet of Things is the web-controlled light: an LED strip. The “control” part of the lights is managed by the Arduino. The task of the Arduino is to “GET” the data that matters and send that info to the light strip. The ways in which the LED strip can be manipulated from the Arduino can be listed as follows:

  • 1 LED can have 3 inputs, R G B.
  • Each of R, G, B can take a value from 0 to 255, which is 256 values.
  • There are 32 LEDs in the strip.

That makes for a lot of math and logical decisions for the Arduino to handle. That is the whole point of networking these devices: now they have access to on-demand computing resources. This means we need a medium to connect the Arduino to the internet, which is done using a Wi-Fi module that communicates with the Arduino over USART.

There are two ways to handle the information flow:

  • To directly give out RGB information every hour
    • A lot of data transfer between the devices, but all of the computation and decision-making is taken care of by the remote server
    • the device is highly dependent on the connection for operation
  • To query the server for Sunlight information (sunrise and sunset) and compute the colour information.
    • Having an update system to change the “compute” algorithm makes it highly robust.
    • Get information for a day or a week and then compute with the algorithm (a rough sketch of this follows the list below). Also, listen for any settings like
      • Party lights
      • Work lights
      • get back to sunlight operation
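As mentioned above, here is a rough sketch (in JavaScript, purely for illustration) of the second approach: given the queried sunrise and sunset times, compute an RGB value for the current time. The colour mapping is entirely made up.

// Map the current time to an RGB triple: warm light near sunrise/sunset,
// cooler white around midday, a dim warm glow at night. Times are in minutes.
function sunlightColour(now, sunrise, sunset) {
  if (now < sunrise || now > sunset) return [8, 4, 0]; // night
  const t = (now - sunrise) / (sunset - sunrise);      // 0 at sunrise, 1 at sunset
  const midday = 1 - Math.abs(2 * t - 1);              // peaks at solar noon
  return [
    255,
    Math.round(120 + 135 * midday), // green rises towards midday
    Math.round(40 + 215 * midday),  // blue rises towards midday
  ];
}

// Example: sunrise 06:30, sunset 19:45, queried at 13:00.
console.log(sunlightColour(13 * 60, 6 * 60 + 30, 19 * 60 + 45)); // [255, 252, 251]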

Tools used:

A low-cost, easy implementation of mediated atmospheres, to make your apartment provide sensible lighting that can help harmonize the body and mind with the circadian rhythm.

[1] 17161.JosephA.Paradiso.Preprint1.pdf

[2] https://vimeo.com/192196471