Algorithmic Robotics

Algorithmic robotics is a field of study that focuses on the design and analysis of algorithms for controlling robotic systems. These algorithms are used to solve various problems in robotics, such as planning paths for a robot to move from one location to another, coordinating the actions of multiple robots, or interpreting sensor data to understand the robot’s environment.

Here are some key areas of focus in algorithmic robotics:

Motion Planning:

This involves developing algorithms that can determine a sequence of movements or actions that a robot should take to achieve a specific goal, such as reaching a target location or picking up an object. This can be a complex problem, especially in environments with obstacles or in situations where the robot has many degrees of freedom (e.g., a robotic arm with multiple joints). Motion planning algorithms rely on various techniques, such as graph-based search methods, probabilistic approaches, or optimization strategies. These algorithms consider factors such as the robot’s kinematics, dynamics, and sensor information to generate efficient and safe motion plans. Additionally, motion planning algorithms need to take into account uncertainties and dynamic changes in the environment to ensure robust performance. By continuously updating plans based on real-time sensor data, robots can adapt to unexpected obstacles or changes in the environment, making motion planning a crucial aspect of autonomous robotics.
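
To make the graph-based search idea concrete, here is a minimal sketch of A* on a 2-D occupancy grid. The toy map, start, and goal are made-up inputs; a real planner would search the robot's configuration space rather than a flat grid.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free cell, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start)]                         # entries are (f, g, cell)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell = heapq.heappop(open_set)
        if cell == goal:                                      # walk parents back to start
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (cell[0] + dr, cell[1] + dc)
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols and grid[nbr[0]][nbr[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nbr, float("inf")):        # found a cheaper route
                    g_cost[nbr], came_from[nbr] = ng, cell
                    heapq.heappush(open_set, (ng + h(nbr), ng, nbr))
    return None

grid = [[0, 0, 0, 1],       # toy map: 0 = free, 1 = obstacle
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))
```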

Seminal papers to read about motion planning:

  1. Sampling-based algorithms for optimal motion planning by S. Karaman and Emilio Frazzoli. This paper rigorously analyzes the asymptotic behavior of the cost of the solution returned by stochastic sampling-based path planning algorithms as the number of samples increases. It introduces new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal.
  2. A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles by B. Paden, Michal Cáp, Sze Zheng Yong, Dmitry S. Yershov, and Emilio Frazzoli. This paper surveys the current state of the art on planning and control algorithms with particular regard to the urban setting.
  3. Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning by Michael Everett, Yu Fan Chen, and J. How. This work extends previous approaches to develop an algorithm that learns collision avoidance among a variety of types of dynamic agents without assuming they follow any particular behavior rules.
  4. A Review of Motion Planning for Highway Autonomous Driving by Laurene Claussmann, Marc Revilloud, D. Gruyer, and S. Glaser. This paper presents a review of motion planning techniques for highway autonomous driving.

Further reading:

  1. Motion planning of non-holonomic robots with Ackermann steering
  2. Non-holonomic modeling of mobile robots

Multi-Robot Systems:

When multiple robots are working together, algorithms are needed to coordinate their actions and ensure they work efficiently as a team. This can involve a wide range of tasks, including dividing up work among the robots, avoiding collisions, synchronizing their actions, performing complex cooperative behaviors, and making informed decisions based on real-time data. These cooperative behaviors go beyond simple coordination and can include sophisticated strategies such as task allocation, formation control, and dynamic role assignment.

Additionally, multi-robot systems can exhibit emergent behaviors, where the collective actions of the robots produce intelligent and efficient problem-solving. For example, in cooperative transport, robots can distribute a load among themselves to optimize energy consumption and avoid overloading any individual robot. In cooperative mapping, robots can explore an unknown environment together, sharing sensor data to construct a comprehensive map. Techniques such as swarm intelligence and machine learning can further extend these capabilities. Swarm intelligence allows the robots to make collective decisions based on local interactions and simple rules, enabling them to adapt to changing environments and handle unpredictable situations.

Machine learning algorithms can enable robots to learn from their experiences and improve their performance over time, leading to more efficient and effective collaboration. In summary, multi-robot systems are a rapidly evolving field in which coordination and cooperation among robots unlock a wide range of applications, including search and rescue, automated warehouse management, and surveillance. Efficient coordination and communication among the robots remain the crucial ingredients for success in this area of research and development.

Seminal papers to read about multi-robot systems:

  1. Cooperative Object Transport in Multi-Robot Systems: A Review by Elio Tuci, M. Alkilabi, and O. Akanyeti. This paper reviews advancements in multi-robot systems designed for cooperative object transport. It provides a comprehensive summary of the scientific literature in this field, focusing on transport strategies such as pushing-only, grasping, and caging.
  2. Coordinated Control of Multi-Robot Systems: A Survey by J. Cortés and M. Egerstedt. This paper discusses a class of problems related to the assembly of preferable geometric shapes in a decentralized manner through the formulation of descent-based algorithms defined with respect to team-level performance costs.
  3. Simultaneous task allocation and planning for temporal logic goals in heterogeneous multi-robot systems by Philipp Schillinger, Mathias Bürger, and D. Dimarogonas. This paper describes a framework for automatically generating optimal action-level behavior for a team of robots based on temporal logic mission specifications under resource constraints. The approach optimally allocates separable tasks to available robots, identifying sub-tasks in an automaton representation of the mission specification and simultaneously allocating the tasks and planning their execution.

Perception and Sensor Fusion:

Robots often have multiple sensors (e.g., cameras, lidar, accelerometers) that provide different types of data about the environment. Algorithms play a crucial role in interpreting this data and fusing it into a comprehensive, coherent picture of the robot's surroundings.

  • Cameras capture visual information that lets a robot identify objects, detect obstacles, and recognize patterns, supporting more precise and efficient navigation.
  • Lidar sensors use laser beams to measure distances and build detailed, accurate maps of the environment. This precise ranging and mapping makes lidar central to applications such as autonomous vehicles, aerial mapping, and urban planning.
  • Accelerometers measure acceleration and the direction of gravity, giving the robot a sense of its own motion and orientation.
  • Wheel encoders provide high-frequency odometry data about how far each wheel has turned.

The challenge lies in combining the data from these diverse sensors in a meaningful way. Sensor fusion algorithms synthesize the individual streams into a coherent representation of the environment, which the robot uses to make informed decisions, plan its actions, and navigate effectively. In summary, integrating multiple sensors with intelligent algorithms lets robots perceive their environment in enough detail to interact with and manipulate it reliably.
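
As a concrete illustration of sensor fusion, here is a minimal sketch of a 1-D Kalman filter that fuses wheel-odometry increments (the prediction step) with a noisy position sensor (the correction step). The noise variances and simulated data are made-up values.

```python
import random

def kalman_1d(odometry, measurements, q=0.02, r=0.5):
    """Fuse odometry increments (predict) with position measurements (correct).

    q: process noise variance (trust in odometry); r: measurement noise variance.
    """
    x, p = 0.0, 1.0                    # state estimate and its variance
    estimates = []
    for u, z in zip(odometry, measurements):
        x, p = x + u, p + q            # predict: apply odometry, uncertainty grows
        k = p / (p + r)                # Kalman gain: how much to trust the sensor
        x, p = x + k * (z - x), (1 - k) * p   # correct with the measurement
        estimates.append(x)
    return estimates

# simulate a robot moving 0.1 m per step, with noise on both sensors
truth = [0.1 * (i + 1) for i in range(50)]
odo = [0.1 + random.gauss(0, 0.02) for _ in range(50)]
meas = [p + random.gauss(0, 0.5) for p in truth]
est = kalman_1d(odo, meas)
print(f"truth {truth[-1]:.2f} m, fused estimate {est[-1]:.2f} m")
```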

Machine Learning for Robotics:

Machine learning algorithms can be used to enable robots to learn from experience and improve their performance over time. This can involve techniques such as reinforcement learning, which allows the robot to learn by trial and error and make informed decisions based on previous experiences. Additionally, supervised learning allows the robot to learn from labeled training data, enabling it to understand patterns and make accurate predictions. By implementing these advanced techniques, robots can become more intelligent and adaptable, enhancing their ability to interact with the environment and carry out complex tasks.
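
As a toy illustration of reinforcement learning, the sketch below uses tabular Q-learning to teach a simulated robot to reach the end of a five-cell corridor by trial and error. The state space, rewards, and hyperparameters are made-up values chosen for illustration.

```python
import random

# Tabular Q-learning: a robot learns to reach the right end of a 1-D corridor.
# States are cells 0..4; actions are 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
q_table = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2     # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # epsilon-greedy: mostly exploit the current Q-values, sometimes explore
        if random.random() < epsilon:
            action = random.randint(0, 1)
        else:
            action = q_table[state].index(max(q_table[state]))
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else -0.01     # small penalty per step
        # Q-update: move Q toward reward + discounted best future value
        target = reward + gamma * max(q_table[next_state])
        q_table[state][action] += alpha * (target - q_table[state][action])
        state = next_state

print("greedy policy for states 0-3:",
      ["left" if q[0] > q[1] else "right" for q in q_table[:GOAL]])
```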

Control Algorithms:

These are the algorithms that determine how a robot should respond to its current state and environment in order to achieve its goals. They typically combine feedback control, where the robot adjusts its actions based on continuous evaluation of its current state, with feedforward control, where the robot predicts the effects of its actions and plans its movements accordingly. Together, these mechanisms let robots adapt dynamically and make real-time decisions based on their surroundings and desired outcomes.

By continuously analyzing data from an array of sensors, robots can fine-tune their behavior and improve their performance and efficiency. Feedback control lets a robot respond to changes in its environment as they happen, while feedforward control lets it anticipate likely outcomes and plan its actions to maximize the chance of success. Combined, these techniques enable robots to navigate intricate environments, interact safely with humans, and accomplish a wide range of tasks with precision and autonomy.
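
The sketch below illustrates the feedback/feedforward split on a toy first-order plant: a PID term corrects the observed velocity error, while a feedforward term anticipates the drag assumed in the plant model. All gains and the friction model are made-up values.

```python
class PIDController:
    """Feedback: correct based on the error between setpoint and measurement."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Track a 1-D velocity setpoint. Feedforward predicts the command needed to
# cancel the (assumed) viscous friction at the setpoint; feedback cleans up
# the remaining error from disturbances and model mismatch.
pid = PIDController(kp=2.0, ki=0.5, kd=0.1)
v, dt, friction = 0.0, 0.05, 0.8
for _ in range(100):
    v_target = 1.0
    feedforward = friction * v_target            # anticipate drag at the setpoint
    feedback = pid.update(v_target, v, dt)       # react to the measured error
    u = feedforward + feedback
    v += (u - friction * v) * dt                 # toy first-order plant
print(f"velocity after 5 s: {v:.3f} (target 1.0)")
```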

In all these areas, the focus is on developing algorithms that are efficient, reliable, and robust to uncertainties in the environment or the robot’s sensors and actuators.

Self-assembling Algorithms in Robotics

Self-assembling algorithms in robotics refer to the computational processes that enable individual robots to autonomously join together and coordinate their actions to form larger robotic systems. These algorithms are inspired by natural phenomena such as the behavior of social insects, cells, and other biological systems that exhibit complex collective behaviors from simple individual interactions.

Here are some key aspects of self-assembling algorithms in robotics:

Local Interactions:

In self-assembling robotic systems, each robot typically has a limited range of perception and can only interact with other robots in its immediate vicinity. These interactions can be physical (e.g., attaching to another robot) or informational (e.g., exchanging data). The robots do not have global knowledge of the system or the environment. Instead, they must rely on their local interactions to make decisions. This is similar to how ants in a colony interact with each other based on local signals, leading to the emergence of complex collective behaviors.

Decentralized Control:

Unlike traditional robotic systems that are controlled by a central unit, self-assembling robotic systems operate under decentralized control. This means that each robot acts autonomously, making decisions based on its own state and the information it receives from its neighbors. There is no single point of control or failure, which makes the system more robust and scalable. Decentralized control also allows the system to operate in unknown or changing environments, as the robots can adapt their behavior locally without needing to update a global model or plan.
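
A classic example of decentralized control is average consensus, sketched below: each robot repeatedly nudges its estimate toward its neighbors' estimates using only local communication, and the whole team converges to the network-wide average. The topology, step size, and initial values are made up for illustration.

```python
# Decentralized average consensus: each robot updates its own estimate using
# only values from its immediate neighbors. Over an undirected graph, the
# update x_i += eps * sum(x_j - x_i) preserves the total sum, so every
# estimate converges to the network-wide average.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # four robots in a line
values = [0.0, 4.0, 8.0, 12.0]                        # e.g., local sensor readings
eps = 0.3                                             # step size < 1 / max degree

for _ in range(100):                                  # synchronous update rounds
    deltas = [eps * sum(values[j] - values[i] for j in neighbors[i])
              for i in range(len(values))]
    values = [v + d for v, d in zip(values, deltas)]

print([round(v, 2) for v in values])                  # all approach the mean, 6.0
```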

Modularity:

The robots in a self-assembling system are usually modular, meaning they are identical or interchangeable units. This modularity allows the system to scale easily, as more robots can be added without changing the overall design or operation of the system. It also provides robustness, as the failure of a single robot does not significantly impact the performance of the system. Furthermore, modularity can enable self-repair or self-reconfiguration, as robots can replace failed units or change the arrangement of units to adapt to different tasks or environments.

Adaptability:

Self-assembling robotic systems are inherently adaptable. They can respond to changes in the environment or in the system itself by adjusting their local interactions and behaviors. For example, if a robot fails, the other robots can reconfigure to compensate for the loss. If a new task is assigned, the robots can reorganize to perform the task more efficiently. This adaptability is crucial for operating in uncertain or dynamic environments, and it is one of the key advantages of self-assembling robotic systems.

Self-assembling algorithms have been used in a variety of applications in robotics. For example, modular self-reconfigurable robotic systems can change their shape to adapt to different tasks or environments. Swarm robotic systems can perform tasks such as collective transport, exploration, or construction. Nanorobotic systems could potentially use self-assembly for medical applications, such as targeted drug delivery.

Designing and analyzing self-assembling algorithms is a complex task that involves understanding the interplay between individual robot behaviors and the resulting collective behavior. It often involves techniques from fields such as control theory, distributed computing, and artificial intelligence.

Motion planning of non-holonomic robots with Ackermann steering

Non-holonomic robots, due to their constraints, require specialized motion planning algorithms for navigation. Ackermann steering is a geometric arrangement of linkages in the steering of a car or other vehicle, designed so that the wheels on the inside and outside of a turn trace out circles of different radii. Here are three types of motion planning algorithms often used in this context:

Sampling-based planners

Planners such as Rapidly-exploring Random Trees (RRT) and Probabilistic Roadmaps (PRM) construct a graph of feasible paths in the robot's configuration space and then search that graph for a path from the start to the goal. They work well in complex, high-dimensional configuration spaces, and they are probabilistically complete: given enough time, they will find a solution if one exists.
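
Here is a bare-bones 2-D RRT sketch. It uses straight-line tree extensions and a user-supplied collision check; a kinodynamic variant for a non-holonomic robot would instead extend the tree with feasible motion primitives. The workspace bounds, step size, and disc obstacle are made-up values.

```python
import math, random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000):
    """Bare-bones 2-D RRT. `is_free(p)` is a user-supplied collision check.

    Note: only the new node is checked here; a real planner would also
    check the connecting edge for collisions.
    """
    nodes, parents = [start], {0: None}
    for _ in range(max_iters):
        # occasionally bias the sample toward the goal to speed up convergence
        sample = goal if random.random() < 0.05 else (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[nearest]
        theta = math.atan2(sample[1] - ny, sample[0] - nx)
        new = (nx + step * math.cos(theta), ny + step * math.sin(theta))
        if not is_free(new):
            continue
        parents[len(nodes)] = nearest        # extend the tree toward the sample
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:  # goal region reached: extract the path
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

free = lambda p: math.dist(p, (5, 5)) > 2.0   # free space outside a disc obstacle
path = rrt((1, 1), (9, 9), free)
print(f"found a path with {len(path)} waypoints" if path else "no path found")
```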

Optimization-based planners

This category includes Model Predictive Control (MPC), which uses a model of the robot's dynamics to predict its future states and then optimizes the control input to minimize a cost function over those predicted states. MPC is particularly well suited to non-holonomic robots because it incorporates the robot's dynamics directly into the planning process.
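
The sketch below shows the receding-horizon idea with the simplest possible optimizer, random shooting: sample candidate control sequences, roll each one through a unicycle model, keep the first command of the cheapest sequence, then replan. A real MPC implementation would use a proper solver and a richer cost function; the horizon, sampling ranges, and goal here are made-up values.

```python
import math, random

def simulate(state, controls, dt=0.1):
    """Roll the unicycle model forward under a sequence of (v, omega) commands."""
    x, y, th = state
    for v, w in controls:
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return (x, y, th)

def mpc_step(state, goal, horizon=10, n_candidates=200):
    """One receding-horizon step: sample control sequences, simulate each with
    the model, and keep the first command of the cheapest sequence."""
    best_cost, best_u = float("inf"), (0.0, 0.0)
    for _ in range(n_candidates):
        seq = [(random.uniform(0, 1), random.uniform(-1, 1)) for _ in range(horizon)]
        xf, yf, _ = simulate(state, seq)
        cost = math.dist((xf, yf), goal)          # terminal distance to the goal
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

state, goal = (0.0, 0.0, 0.0), (3.0, 2.0)
for _ in range(150):
    v, w = mpc_step(state, goal)
    state = simulate(state, [(v, w)])             # apply one command, then replan
    if math.dist(state[:2], goal) < 0.2:
        break
print(f"final position: ({state[0]:.2f}, {state[1]:.2f})")
```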

Artificial Potential Fields

In this approach, the robot is attracted toward its goal location by one potential field and repelled from obstacles by another. The method is computationally cheap, but the robot can become trapped in local minima of the combined field: positions where the attractive and repulsive forces cancel and no further progress toward the goal is possible.
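
A minimal potential-field sketch: the robot takes fixed-size steps down the gradient of an attractive field centered on the goal plus a repulsive field around each obstacle. The gains, influence radius, and obstacle position are made-up values; the early return when the gradient vanishes corresponds exactly to the local-minimum failure mode described above.

```python
import math

def potential_step(pos, goal, obstacles, k_rep=2.0, influence=2.0, step=0.05):
    """Take one fixed-size step down the combined attractive + repulsive potential."""
    gx, gy = pos[0] - goal[0], pos[1] - goal[1]          # attractive gradient
    for ox, oy in obstacles:
        d = math.dist(pos, (ox, oy))
        if 0 < d < influence:                            # repulsion acts only nearby
            push = k_rep * (1.0 / d - 1.0 / influence) / d**3
            gx += push * (ox - pos[0])
            gy += push * (oy - pos[1])
    norm = math.hypot(gx, gy)
    if norm < 1e-9:
        return pos      # vanishing gradient: stuck in a local minimum
    return (pos[0] - step * gx / norm, pos[1] - step * gy / norm)

pos, goal, obstacles = (0.0, 0.0), (8.0, 8.0), [(4.0, 4.2)]
for _ in range(400):
    pos = potential_step(pos, goal, obstacles)
    if math.dist(pos, goal) < 0.1:
        break
print(f"final position: ({pos[0]:.2f}, {pos[1]:.2f})")
```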

In the case of non-holonomic robots, these methods need to be extended or modified to take the non-holonomic constraints into account. For example, the steering angle of an Ackermann steering vehicle or the speed of a differential drive robot may be limited, which imposes a constraint on the robot’s velocity.

Ackermann steering based motion planning

As noted above, Ackermann steering lets the wheels on the inside and outside of a turn trace out circles of different radii. It was invented by the German carriage builder Georg Lankensperger in Munich in 1817, then patented in England in 1818 by his agent, Rudolph Ackermann (1764–1834), for horse-drawn carriages.

In the context of motion planning, Ackermann steering is often used in the design and control of vehicles that move in a plane (e.g., cars, trucks, wheeled robots), and it is particularly relevant for vehicles with non-holonomic constraints, meaning they cannot move in certain directions.

Ackermann steering based motion planning involves creating a model of the vehicle's movement based on the Ackermann steering geometry, and then using this model to plan a path from a start point to a goal point. This path planning takes into account the vehicle's constraints, such as its minimum turning radius.
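
A common way to model an Ackermann-steered vehicle for planning is the kinematic bicycle model, sketched below, in which the steering angle sets the path curvature and the minimum turning radius follows from the maximum steering angle. The wheelbase, speed, and steering limit are made-up values.

```python
import math

def bicycle_step(x, y, theta, v, steer, wheelbase=2.5, dt=0.1):
    """Kinematic bicycle model: the steering angle sets the path curvature."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / wheelbase) * math.tan(steer) * dt   # curvature = tan(steer) / wheelbase
    return x, y, theta

# the minimum turning radius follows directly from the maximum steering angle
max_steer, wheelbase = math.radians(30), 2.5
print(f"minimum turning radius: {wheelbase / math.tan(max_steer):.2f} m")

# drive at full steering lock: the vehicle traces an arc of that radius
x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta = bicycle_step(x, y, theta, v=2.0, steer=max_steer)
print(f"pose after 10 s: x={x:.2f} m, y={y:.2f} m, heading={math.degrees(theta):.0f} deg")
```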

The motion planning algorithm would typically generate a series of steering and velocity commands to move the vehicle from the start to the goal while avoiding obstacles. This can be a complex problem, especially in dynamic environments, and may involve the use of advanced techniques such as probabilistic roadmaps, rapidly-exploring random trees (RRTs), or other methods used in robotics.

It’s important to note that the specifics of the motion planning algorithm can depend on the exact nature of the vehicle and the environment. For example, a motion planning algorithm for an autonomous car driving in an urban environment might need to take into account traffic rules, other vehicles, pedestrians, etc., while an algorithm for a wheeled robot in a warehouse might have different considerations.

Non-holonomic modeling of mobile robots

Holonomic vs. Non-Holonomic

In robotics, the terms “holonomic” and “non-holonomic” are used to classify the motion constraints of a robot or a system.

Holonomic constraints

Holonomic constraints depend only on the position and orientation of the system, not on its velocity or acceleration. In other words, a system is holonomic if the number of controllable degrees of freedom equals the total degrees of freedom of the system. Holonomic systems can move in any direction in their configuration space.

For example, a robot with mecanum wheels or an omni wheel design can move directly forward, backward, laterally left, and laterally right, as well as rotate in place. This is because the design of these wheels allows for motion in multiple directions without changing the robot’s orientation.
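
As a sketch of why such a base is holonomic, here is a commonly used inverse-kinematics mapping for a mecanum drive (one common roller convention; the signs depend on how the wheels are mounted, and the geometry values are made up). Any desired combination of forward, lateral, and rotational velocity maps directly to four wheel speeds:

```python
def mecanum_wheel_speeds(vx, vy, omega, lx=0.2, ly=0.15, r=0.05):
    """Inverse kinematics for a mecanum base (X-roller configuration assumed).

    vx: forward m/s, vy: leftward m/s, omega: counter-clockwise rad/s.
    lx, ly: half of wheelbase and track width (m); r: wheel radius (m).
    Returns wheel angular speeds (rad/s): front-left, front-right,
    rear-left, rear-right.
    """
    k = lx + ly
    fl = (vx - vy - k * omega) / r
    fr = (vx + vy + k * omega) / r
    rl = (vx + vy - k * omega) / r
    rr = (vx - vy + k * omega) / r
    return fl, fr, rl, rr

# pure sideways motion: the robot strafes left without turning, something a
# car-like (non-holonomic) robot cannot do
print(mecanum_wheel_speeds(vx=0.0, vy=0.3, omega=0.0))
```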

Non-holonomic constraints

Non-holonomic constraints involve the velocity of the system, not just its position and orientation. In other words, a system is non-holonomic if the number of controllable degrees of freedom is less than the total degrees of freedom of the system. Non-holonomic systems are limited in their movement by these constraints.

A common example of a non-holonomic system is a car or a differential drive robot. A car, for instance, cannot move laterally because the orientation of its wheels only allows for forward or backward motion.

In summary, whether a robot is classified as holonomic or non-holonomic depends on its degrees of freedom and the constraints imposed on its motion by its design and control mechanisms. It’s crucial to consider these constraints when designing a robot’s control system.

Modelling

The non-holonomic model for mobile robots is a mathematical framework that represents the constraints imposed on a robot’s movement due to its mechanical design. Non-holonomic constraints are velocity constraints that limit the robot’s ability to move in certain directions. For example, a car cannot move directly sideways due to its wheels’ orientation, presenting a non-holonomic constraint.

A typical example of a non-holonomic system is a differential drive robot, which has two independently driven wheels and moves by changing their relative speeds.

The robot’s movement is typically represented using the configuration vector, which contains the robot’s position and orientation:

Q = [x, y, θ]ᵀ

The non-holonomic constraints for a differential drive robot can be represented by the following model:

dx/dt = v·cos(θ)
dy/dt = v·sin(θ)
dθ/dt = ω

where:

  • v is the linear velocity
  • ω is the angular velocity
  • θ is the robot’s orientation

This set of equations is a representation of the non-holonomic model, indicating the robot’s change in position (dx/dt, dy/dt) and orientation (dθ/dt) over time, as a function of its velocity and angular velocity.
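
The sketch below integrates this model with simple Euler steps, using the standard differential-drive mapping from wheel speeds to v and ω. The track width, wheel speeds, and time step are made-up values; note that there is no way to command a lateral velocity, which is precisely the non-holonomic constraint.

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, track=0.3, dt=0.05):
    """One Euler integration step of the model above.

    v_left, v_right: wheel rim speeds (m/s); track: distance between wheels (m).
    """
    v = (v_right + v_left) / 2           # linear velocity
    w = (v_right - v_left) / track       # angular velocity
    x += v * math.cos(theta) * dt        # dx/dt = v cos(theta)
    y += v * math.sin(theta) * dt        # dy/dt = v sin(theta)
    theta += w * dt                      # dtheta/dt = omega
    return x, y, theta

# unequal wheel speeds make the robot follow an arc; there is no input that
# produces sideways motion
x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta = diff_drive_step(x, y, theta, v_left=0.4, v_right=0.5)
print(f"pose after 5 s: x={x:.2f} m, y={y:.2f} m, theta={math.degrees(theta):.1f} deg")
```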

In conclusion, non-holonomic modeling is an essential aspect of mobile robotics that allows us to accurately predict and control a robot’s movement despite the constraints imposed by its mechanical design. With an appropriate understanding of these constraints, we can design control systems that effectively navigate the robot in complex environments.


Non-holonomic constraints cover all the other cases: constraints that cannot be written as an equation between the coordinates alone (they often appear as inequalities instead).

An example of a system with non-holonomic constraints is a particle trapped in a spherical shell. In three spatial dimensions, the particle has 3 degrees of freedom. The constraint says that the distance of the particle from the center of the sphere is always less than R: √(x² + y² + z²) < R. We cannot rewrite this as an equality, so it is a non-holonomic, scleronomous constraint.

Some further reading:

http://galileoandeinstein.physics.virginia.edu/7010/CM_29_Rolling_Sphere.pdf

Ackermann steering

Ackermann steering, which is the type of steering geometry used in many automobiles, introduces a non-holonomic constraint.

In a system with Ackermann steering, the vehicle can change its position in the forward and backward direction and can change its orientation by turning its wheels and moving forward or backward, but it cannot move laterally (i.e., to the side without turning). Thus, the motion of the vehicle is constrained to be along the direction that the wheels are steering, and this constraint is velocity-dependent, not just position-dependent.

Therefore, a vehicle with Ackermann steering does not have independent control over all of its degrees of freedom; it is an example of a non-holonomic system. The vehicle has to follow a specific path to reach a desired position and orientation; it cannot simply move there in a straight line (unless the desired position happens to lie directly ahead of or behind the vehicle along its current heading).