Vision or Delusion? When to Believe in Your Version of the Future


Victor and Aswin, co-founders of Theo, have an ambitious goal: to revolutionise delivery using semi-autonomous robots. The team are moving at lightning pace.

But it hasn’t always been easy to get investors to buy into their bold vision of the future.

They join psychologist Dr. Gena Gorlin to explore how they’ve been able to keep their convictions strong in the face of all the noes. Gena and co-host Alice Bentinck reflect on the conversation throughout, pulling out valuable learnings for other ambitious founders.


Algorithmic Robotics


Algorithmic robotics is a field of study that focuses on the design and analysis of algorithms for controlling robotic systems. These algorithms are used to solve various problems in robotics, such as planning paths for a robot to move from one location to another, coordinating the actions of multiple robots, or interpreting sensor data to understand the robot’s environment.

Here are some key areas of focus in algorithmic robotics:

Motion Planning:

This involves developing algorithms that can determine a sequence of movements or actions that a robot should take to achieve a specific goal, such as reaching a target location or picking up an object. This can be a complex problem, especially in environments with obstacles or when the robot has many degrees of freedom (e.g., a robotic arm with multiple joints). Motion planning algorithms rely on various techniques, such as graph-based search methods, probabilistic approaches, or optimization strategies, and they consider factors such as the robot’s kinematics, dynamics, and sensor information to generate efficient and safe motion plans.

Motion planning algorithms also need to account for uncertainty and dynamic changes in the environment to ensure robust performance. By continuously updating plans based on real-time sensor data, robots can adapt to unexpected obstacles or changes in the environment, making motion planning a crucial aspect of autonomous robotics.
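To make this concrete, below is a minimal sketch of a sampling-based planner: a bare-bones RRT for a point robot in a 2D workspace. The workspace bounds, obstacle layout, step size, and goal bias are illustrative assumptions, not a production planner.

```javascript
// Minimal 2D RRT sketch for a point robot.
// Assumptions: a 10x10 workspace, one circular obstacle, fixed step size.
const STEP = 0.5;
const obstacles = [{ x: 5, y: 5, r: 2 }]; // hypothetical obstacle

const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);
const collides = (p) => obstacles.some((o) => dist(p, o) < o.r);

function rrt(start, goal, iterations = 5000) {
  const nodes = [{ ...start, parent: null }];
  for (let i = 0; i < iterations; i++) {
    // Sample a random point, occasionally biased toward the goal.
    const sample =
      Math.random() < 0.1 ? goal : { x: Math.random() * 10, y: Math.random() * 10 };
    // Find the nearest node already in the tree.
    let nearest = nodes[0];
    for (const n of nodes) if (dist(n, sample) < dist(nearest, sample)) nearest = n;
    const d = dist(nearest, sample);
    if (d === 0) continue;
    // Steer from the nearest node toward the sample by at most one step.
    const node = {
      x: nearest.x + ((sample.x - nearest.x) / d) * Math.min(STEP, d),
      y: nearest.y + ((sample.y - nearest.y) / d) * Math.min(STEP, d),
      parent: nearest,
    };
    if (collides(node)) continue; // reject samples inside obstacles
    nodes.push(node);
    if (dist(node, goal) < STEP) {
      // Walk back up the tree to recover the path.
      const path = [];
      for (let n = node; n; n = n.parent) path.unshift({ x: n.x, y: n.y });
      return path;
    }
  }
  return null; // no path found within the iteration budget
}

console.log(rrt({ x: 1, y: 1 }, { x: 9, y: 9 }));
```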

Seminal papers to read about motion planning:

  1. Sampling-based algorithms for optimal motion planning by Sertac Karaman and Emilio Frazzoli. This paper rigorously analyzes the asymptotic behavior of the cost of the solution returned by stochastic sampling-based path planning algorithms as the number of samples increases. It introduces new algorithms, namely PRM* and RRT*, which are provably asymptotically optimal.
  2. A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles by Brian Paden, Michal Čáp, Sze Zheng Yong, Dmitry S. Yershov, and Emilio Frazzoli. This paper surveys the state of the art in planning and control algorithms, with particular regard to the urban setting.
  3. Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning by Michael Everett, Yu Fan Chen, and Jonathan P. How. This work extends previous approaches to develop an algorithm that learns collision avoidance among a variety of types of dynamic agents without assuming they follow any particular behavior rules.
  4. A Review of Motion Planning for Highway Autonomous Driving by Laurène Claussmann, Marc Revilloud, Dominique Gruyer, and Sébastien Glaser. This paper reviews motion planning techniques for highway autonomous driving.

Further reading:

  1. Motion planning of non-holonomic robots like Ackermann steering
  2. Non-holonomic modeling of mobile robots

Multi-Robot Systems:

When multiple robots are working together, algorithms are needed to coordinate their actions and ensure they work efficiently as a team. This can involve a wide range of tasks, including dividing up work among the robots, avoiding collisions, synchronizing their actions, performing complex cooperative behaviors, and making informed decisions based on real-time data. These cooperative behaviors go beyond simple coordination and can include sophisticated strategies such as task allocation, formation control, and dynamic role assignment.

Additionally, multi-robot systems can exhibit emergent behaviors, where the collective actions of the robots result in intelligent and efficient problem-solving. For example, in cooperative transport, robots can strategize and distribute the load to optimize energy consumption and avoid overloading any individual robot. In cooperative mapping, robots can collaborate to explore and map an unknown environment by sharing their sensor data and constructing a comprehensive map.

To further enhance the capabilities of multi-robot systems, advanced techniques such as swarm intelligence and machine learning can be employed. Swarm intelligence allows the robots to collectively make decisions based on local interactions and simple rules, enabling them to adapt to changing environments and handle unpredictable situations.

Machine learning can enable robots to learn from experience and improve their performance over time, leading to more efficient and effective collaboration. From task allocation to emergent behaviors, multi-robot systems hold significant potential across domains such as search and rescue, automated warehouse management, and surveillance, and efficient coordination and communication among the robots remain the crucial ingredients for success.
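As a toy illustration of task allocation, here is a greedy, auction-style sketch in which each task goes to the robot that can reach it most cheaply. The robot and task positions, and the distance-as-bid rule, are illustrative assumptions:

```javascript
// Toy greedy task allocation: each task goes to the robot that can
// reach it most cheaply (bid = Euclidean distance).
const robots = [
  { id: 'r1', x: 0, y: 0, tasks: [] },
  { id: 'r2', x: 10, y: 0, tasks: [] },
];
const tasks = [
  { id: 't1', x: 2, y: 1 },
  { id: 't2', x: 9, y: 3 },
  { id: 't3', x: 5, y: 5 },
];

const cost = (r, t) => Math.hypot(r.x - t.x, r.y - t.y);

for (const task of tasks) {
  // Each robot "bids" its cost; the lowest bidder wins the task.
  let winner = robots[0];
  for (const r of robots) if (cost(r, task) < cost(winner, task)) winner = r;
  winner.tasks.push(task.id);
  // Assume the winner moves to the task, so later bids reflect its new position.
  winner.x = task.x;
  winner.y = task.y;
}

robots.forEach((r) => console.log(r.id, '->', r.tasks));
```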

Seminal papers to read about multi-robot systems:

  1. Cooperative Object Transport in Multi-Robot Systems: A Review by Elio Tuci, Muhanad H. M. Alkilabi, and Otar Akanyeti. This paper reviews advancements in multi-robot systems designed for cooperative object transport. It provides a comprehensive summary of the scientific literature in this field, focusing on transport strategies such as pushing-only, grasping, and caging.
  2. Coordinated Control of Multi-Robot Systems: A Survey by Jorge Cortés and Magnus Egerstedt. This paper discusses a class of problems related to the assembly of preferable geometric shapes in a decentralized manner through the formulation of descent-based algorithms defined with respect to team-level performance costs.
  3. Simultaneous task allocation and planning for temporal logic goals in heterogeneous multi-robot systems by Philipp Schillinger, Mathias Bürger, and Dimos V. Dimarogonas. This paper describes a framework for automatically generating optimal action-level behavior for a team of robots based on temporal logic mission specifications under resource constraints. The approach optimally allocates separable tasks to available robots, identifying sub-tasks in an automaton representation of the mission specification and simultaneously allocating the tasks and planning their execution.

Perception and Sensor Fusion:

Robots often have multiple sensors (e.g., cameras, lidar, accelerometers) that provide different types of data about the environment. Algorithms play a crucial role in interpreting this data and combining it into a comprehensive understanding of the robot’s surroundings.

  • Cameras capture visual information that lets a robot identify objects, detect obstacles, and recognize patterns, enabling more precise and efficient navigation.
  • Lidar sensors use laser beams to measure distances and build detailed, accurate maps of the surroundings. This precise ranging and mapping capability makes lidar central to applications such as autonomous vehicles, aerial mapping, and urban planning.
  • Accelerometers give the robot a sense of its own movement and of gravity, information that is essential for navigating its surroundings effectively.
  • Wheel encoders provide high-frequency odometry data.

The challenge lies in combining the data from these diverse sensors in a meaningful way. Sensor fusion algorithms synthesize the information into a coherent representation of the environment, enabling the robot to make informed decisions, plan its actions, and navigate its surroundings effectively.
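As a small illustration of sensor fusion, here is a complementary-filter sketch that blends a fast but drifting heading estimate (integrated gyro rate) with a slow, noisy absolute measurement (say, a compass). The blend factor and the simulated readings are illustrative assumptions:

```javascript
// Complementary-filter sketch: fuse a fast, drifting estimate (integrated
// gyro rate) with a slow, noisy absolute measurement (e.g., a compass).
const ALPHA = 0.98; // trust placed in the integrated estimate
const DT = 0.01;    // timestep in seconds

let heading = 0; // fused heading estimate, radians

function fuseHeading(gyroRate, absoluteHeading) {
  const integrated = heading + gyroRate * DT; // dead-reckoned update
  // Blend: mostly the smooth integrated value, corrected by the absolute one.
  heading = ALPHA * integrated + (1 - ALPHA) * absoluteHeading;
  return heading;
}

// Simulated loop: constant 0.1 rad/s turn, noisy compass around the true value.
let trueHeading = 0;
for (let i = 0; i < 1000; i++) {
  trueHeading += 0.1 * DT;
  const compass = trueHeading + (Math.random() - 0.5) * 0.2; // noisy measurement
  fuseHeading(0.1, compass);
}
console.log('fused:', heading.toFixed(3), 'true:', trueHeading.toFixed(3));
```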

Machine Learning for Robotics:

Machine learning algorithms can be used to enable robots to learn from experience and improve their performance over time. This can involve techniques such as reinforcement learning, which allows the robot to learn by trial and error and make informed decisions based on previous experiences. Additionally, supervised learning allows the robot to learn from labeled training data, enabling it to understand patterns and make accurate predictions. By implementing these advanced techniques, robots can become more intelligent and adaptable, enhancing their ability to interact with the environment and carry out complex tasks.
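A tiny reinforcement learning sketch can make this concrete: tabular Q-learning on a one-dimensional corridor, where the robot learns that moving right reaches the goal. The corridor, rewards, and hyperparameters are illustrative assumptions:

```javascript
// Tabular Q-learning sketch on a 1-D corridor: states 0..4, goal at 4.
// Actions: 0 = left, 1 = right.
const N = 5, GOAL = 4;
const ALPHA = 0.5, GAMMA = 0.9, EPSILON = 0.1;
const Q = Array.from({ length: N }, () => [0, 0]);

function step(s, a) {
  const next = Math.max(0, Math.min(N - 1, s + (a === 1 ? 1 : -1)));
  return { next, reward: next === GOAL ? 1 : 0 };
}

for (let episode = 0; episode < 500; episode++) {
  let s = 0;
  while (s !== GOAL) {
    // epsilon-greedy action selection
    const a = Math.random() < EPSILON ? (Math.random() < 0.5 ? 0 : 1)
                                      : (Q[s][1] >= Q[s][0] ? 1 : 0);
    const { next, reward } = step(s, a);
    // Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a')
    Q[s][a] += ALPHA * (reward + GAMMA * Math.max(...Q[next]) - Q[s][a]);
    s = next;
  }
}
// Learned policy per state (the goal state itself is never updated).
console.log(Q.map((q) => (q[1] > q[0] ? 'right' : 'left')));
```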

Control Algorithms:

Control algorithms determine how a robot should respond to its current state and environment in order to achieve its goals. This involves feedback control, where the robot adjusts its actions based on continuous evaluation of its current state, and feedforward control, where the robot predicts the effects of its actions and plans its movements accordingly. These algorithms are what allow robots to adapt dynamically and make real-time decisions based on their surroundings and desired outcomes.

By continuously analyzing sensor data, robots can fine-tune their behavior and improve their performance. Feedback control lets them respond to changes in their environment as they happen, while feedforward control lets them anticipate the likely outcome of an action and plan for it in advance. Together, these techniques allow robots to navigate complex environments, interact safely with humans, and carry out a wide range of intricate tasks with precision and autonomy.
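Below is a minimal sketch combining the two ideas: a PID feedback term driven by the observed error, plus a feedforward term taken from a planned turn rate. The gains and the deliberately trivial plant model are illustrative assumptions:

```javascript
// Feedback + feedforward sketch: PID on heading error plus a feedforward
// term from the planned turn rate.
const KP = 2.0, KI = 0.1, KD = 0.3, DT = 0.02;

function makeController() {
  let integral = 0, prevError = 0;
  return function control(setpoint, measured, plannedRate) {
    const error = setpoint - measured;
    integral += error * DT;
    const derivative = (error - prevError) / DT;
    prevError = error;
    // Feedback reacts to the observed error; feedforward anticipates
    // the command needed to follow the plan even with zero error.
    const feedback = KP * error + KI * integral + KD * derivative;
    const feedforward = plannedRate;
    return feedback + feedforward;
  };
}

// Tiny simulation: the heading follows the commanded rate directly.
const control = makeController();
let heading = 0;
const target = 1.0; // radians
for (let t = 0; t < 200; t++) {
  const u = control(target, heading, 0); // no planned motion, pure regulation
  heading += u * DT;
}
console.log('final heading:', heading.toFixed(3)); // approaches 1.0
```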

In all these areas, the focus is on developing algorithms that are efficient, reliable, and robust to uncertainties in the environment or the robot’s sensors and actuators.

Self-assembling Algorithms in Robotics

Self-assembling algorithms in robotics refer to the computational processes that enable individual robots to autonomously join together and coordinate their actions to form larger robotic systems. These algorithms are inspired by natural phenomena such as the behavior of social insects, cells, and other biological systems that exhibit complex collective behaviors from simple individual interactions.

Here are some key aspects of self-assembling algorithms in robotics:

Local Interactions:

In self-assembling robotic systems, each robot typically has a limited range of perception and can only interact with other robots in its immediate vicinity. These interactions can be physical (e.g., attaching to another robot) or informational (e.g., exchanging data). The robots do not have global knowledge of the system or the environment. Instead, they must rely on their local interactions to make decisions. This is similar to how ants in a colony interact with each other based on local signals, leading to the emergence of complex collective behaviors.

Decentralized Control:

Unlike traditional robotic systems that are controlled by a central unit, self-assembling robotic systems operate under decentralized control. This means that each robot acts autonomously, making decisions based on its own state and the information it receives from its neighbors. There is no single point of control or failure, which makes the system more robust and scalable. Decentralized control also allows the system to operate in unknown or changing environments, as the robots can adapt their behavior locally without needing to update a global model or plan.
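A simple way to see decentralized control at work is a consensus rule: each robot repeatedly averages its heading with those of the neighbors it can sense, and a common heading emerges without any central coordinator. The positions, sensing range, and update rate below are illustrative assumptions:

```javascript
// Decentralized heading consensus sketch: each robot only sees neighbors
// within RANGE and nudges its heading toward their average. No central
// unit; agreement emerges from local interactions alone.
const RANGE = 3, RATE = 0.2;
const robots = [
  { x: 0, y: 0, heading: 0.0 },
  { x: 2, y: 0, heading: 1.5 },
  { x: 4, y: 0, heading: 3.0 },
  { x: 6, y: 0, heading: 0.5 },
];

function consensusStep() {
  const updates = robots.map((r) => {
    const neighbors = robots.filter(
      (o) => o !== r && Math.hypot(o.x - r.x, o.y - r.y) <= RANGE
    );
    if (neighbors.length === 0) return r.heading;
    const avg = neighbors.reduce((s, o) => s + o.heading, 0) / neighbors.length;
    // Move a fraction of the way toward the neighborhood average.
    return r.heading + RATE * (avg - r.heading);
  });
  updates.forEach((h, i) => (robots[i].heading = h));
}

for (let i = 0; i < 100; i++) consensusStep();
console.log(robots.map((r) => r.heading.toFixed(3))); // headings converge
```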

Modularity:

The robots in a self-assembling system are usually modular, meaning they are identical or interchangeable units. This modularity allows the system to scale easily, as more robots can be added without changing the overall design or operation of the system. It also provides robustness, as the failure of a single robot does not significantly impact the performance of the system. Furthermore, modularity can enable self-repair or self-reconfiguration, as robots can replace failed units or change the arrangement of units to adapt to different tasks or environments.

Adaptability:

Self-assembling robotic systems are inherently adaptable. They can respond to changes in the environment or in the system itself by adjusting their local interactions and behaviors. For example, if a robot fails, the other robots can reconfigure to compensate for the loss. If a new task is assigned, the robots can reorganize to perform the task more efficiently. This adaptability is crucial for operating in uncertain or dynamic environments, and it is one of the key advantages of self-assembling robotic systems.

Self-assembling algorithms have been used in a variety of applications in robotics. For example, modular self-reconfigurable robotic systems can change their shape to adapt to different tasks or environments. Swarm robotic systems can perform tasks such as collective transport, exploration, or construction. Nanorobotic systems could potentially use self-assembly for medical applications, such as targeted drug delivery.

Designing and analyzing self-assembling algorithms is a complex task that involves understanding the interplay between individual robot behaviors and the resulting collective behavior. It often involves techniques from fields such as control theory, distributed computing, and artificial intelligence.

Motion planning of non-holonomic robots like Ackermann steering

Non-holonomic robots, due to their motion constraints, require sophisticated motion planning algorithms for navigation. A vehicle with Ackermann steering (described in more detail below) is a classic example. Here are three types of motion planning algorithms often used in this context:

Sampling-based planners

Planners such as Rapidly-exploring Random Trees (RRT) and Probabilistic Roadmaps (PRM) construct a graph of feasible paths within the robot’s configuration space and then search this graph for a path from the start to the goal. They work well in complex, high-dimensional configuration spaces and are probabilistically complete: given enough time, they will find a solution if one exists.

Optimization-based planners

This category includes Model Predictive Control (MPC), which uses a model of the robot’s dynamics to predict its future states and optimizes the control input to minimize a cost function over those predicted states. MPC is particularly well suited to non-holonomic robots because it incorporates the robot’s dynamics directly into the planning process.

Artificial Potential Fields

In this approach, the robot is attracted toward the goal location by one potential field and repelled from obstacles by another. The method is computationally cheap, but the robot can become trapped in local minima of the combined field, where the attractive and repulsive forces cancel out and the robot makes no further progress.
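Here is a minimal sketch of the idea: an attractive force pulls the robot toward the goal, a repulsive force pushes it away from nearby obstacles, and the robot follows the combined direction. The gains, influence radius, and geometry are illustrative assumptions:

```javascript
// Artificial potential field sketch: an attractive pull toward the goal
// plus a repulsive push from obstacles within an influence radius.
const K_ATT = 1.0, K_REP = 2.0, INFLUENCE = 2.0, STEP = 0.05;
const goal = { x: 9, y: 9 };
const obstacles = [{ x: 5, y: 4, r: 1 }];

function forceAt(p) {
  // Attractive force: proportional to the vector toward the goal.
  let fx = K_ATT * (goal.x - p.x);
  let fy = K_ATT * (goal.y - p.y);
  for (const o of obstacles) {
    const dx = p.x - o.x, dy = p.y - o.y;
    const centerDist = Math.hypot(dx, dy);
    const d = centerDist - o.r; // distance to the obstacle surface
    if (d > 0 && d < INFLUENCE) {
      // Repulsion grows rapidly as the robot nears the surface.
      const mag = K_REP * (1 / d - 1 / INFLUENCE) / (d * d);
      fx += mag * (dx / centerDist);
      fy += mag * (dy / centerDist);
    }
  }
  return { fx, fy };
}

let pos = { x: 1, y: 1 };
for (let i = 0; i < 2000; i++) {
  const { fx, fy } = forceAt(pos);
  const n = Math.hypot(fx, fy) || 1;
  pos = { x: pos.x + STEP * (fx / n), y: pos.y + STEP * (fy / n) }; // unit step
  if (Math.hypot(goal.x - pos.x, goal.y - pos.y) < 0.1) break;
}
console.log(pos); // near the goal, unless the field had a local minimum
```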

In the case of non-holonomic robots, these methods need to be extended or modified to take the non-holonomic constraints into account. For example, the steering angle of an Ackermann steering vehicle or the speed of a differential drive robot may be limited, which imposes a constraint on the robot’s velocity.

Ackermann steering-based motion planning

Ackermann steering is a geometric arrangement of linkages in the steering of a car or other vehicle designed to solve the problem of wheels on the inside and outside of a turn needing to trace out circles of different radii. It was invented by the German carriage builder Georg Lankensperger in Munich in 1817, then patented by his agent in England, Rudolph Ackermann (1764–1834), in 1818 for horse-drawn carriages.

In the context of motion planning, Ackermann steering is often used in the design and control of vehicles that move in a plane (e.g., cars, trucks, wheeled robots), and it’s particularly useful for vehicles that have non-holonomic constraints, meaning they can’t move in certain directions.

Ackermann steering-based motion planning involves creating a model of the vehicle’s movement based on the Ackermann steering principles, and then using this model to plan a path from a start point to a goal point. This path planning takes into account the vehicle’s constraints, such as its minimum turning radius.
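The vehicle model underlying such planning is often approximated by the kinematic bicycle model, sketched below. The wheelbase, speed, and steering angle are illustrative assumptions:

```javascript
// Kinematic bicycle-model sketch of an Ackermann-steered vehicle:
// the steering angle phi and wheelbase L set the turn rate.
const L = 2.5;   // wheelbase in metres
const DT = 0.05; // timestep in seconds

// State: position (x, y), heading theta.
let state = { x: 0, y: 0, theta: 0 };

function stepVehicle(s, v, phi) {
  // Non-holonomic constraint: the car moves only along its heading;
  // the heading changes at rate (v / L) * tan(phi).
  return {
    x: s.x + v * Math.cos(s.theta) * DT,
    y: s.y + v * Math.sin(s.theta) * DT,
    theta: s.theta + (v / L) * Math.tan(phi) * DT,
  };
}

// Drive forward at 5 m/s with a constant 10-degree steering angle:
// the car traces an arc, never moving sideways.
const phi = (10 * Math.PI) / 180;
for (let t = 0; t < 100; t++) state = stepVehicle(state, 5, phi);
console.log(state);
```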

The motion planning algorithm would typically generate a series of steering and velocity commands to move the vehicle from the start to the goal while avoiding obstacles. This can be a complex problem, especially in dynamic environments, and may involve the use of advanced techniques such as probabilistic roadmaps, rapidly-exploring random trees (RRTs), or other methods used in robotics.

It’s important to note that the specifics of the motion planning algorithm can depend on the exact nature of the vehicle and the environment. For example, a motion planning algorithm for an autonomous car driving in an urban environment might need to take into account traffic rules, other vehicles, pedestrians, etc., while an algorithm for a wheeled robot in a warehouse might have different considerations.

Non-holonomic modeling of mobile robots

Holonomic vs. non-holonomic

In robotics, the terms “holonomic” and “non-holonomic” are used to classify the motion constraints of a robot or a system.

Holonomic constraints

are constraints that depend only on the position and orientation of the system, not on its velocity or acceleration. In other words, a system is holonomic if the number of controllable degrees of freedom is equal to the total degrees of freedom of the system. Holonomic systems can move in any direction in their configuration space.

For example, a robot with mecanum wheels or an omni wheel design can move directly forward, backward, laterally left, and laterally right, as well as rotate in place. This is because the design of these wheels allows for motion in multiple directions without changing the robot’s orientation.

Non-holonomic constraints

are those that involve the velocity of the system and not just its position and orientation. In other words, a system is non-holonomic if the number of controllable degrees of freedom is less than the total degrees of freedom of the system. Non-holonomic systems are limited in their movement due to these constraints.

A common example of a non-holonomic system is a car or a differential drive robot. A car, for instance, cannot move laterally because the orientation of its wheels only allows for forward or backward motion.

In summary, whether a robot is classified as holonomic or non-holonomic depends on its degrees of freedom and the constraints imposed on its motion by its design and control mechanisms. It’s crucial to consider these constraints when designing a robot’s control system.

Modelling

The non-holonomic model for mobile robots is a mathematical framework that represents the constraints imposed on a robot’s movement due to its mechanical design. Non-holonomic constraints are velocity constraints that limit the robot’s ability to move in certain directions. For example, a car cannot move directly sideways due to its wheels’ orientation, presenting a non-holonomic constraint.

A typical example of a non-holonomic system is a differential drive robot. A differential drive robot has two individually powered wheels that are driven independently, and the robot moves by changing the relative speeds of these wheels.

The robot’s movement is typically represented using the configuration vector, which contains the robot’s position and orientation:

Q = [x, y, θ]ᵀ

The non-holonomic constraints for a differential drive robot can be represented by the following model:

dx/dt = v·cos(θ)
dy/dt = v·sin(θ)
dθ/dt = ω

where:

  • v is the linear velocity
  • ω is the angular velocity
  • θ is the robot’s orientation

This set of equations is a representation of the non-holonomic model, indicating the robot’s change in position (dx/dt, dy/dt) and orientation (dθ/dt) over time, as a function of its velocity and angular velocity.
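A minimal sketch of how this model is used in practice: Euler-integrating the equations above to simulate the robot’s motion under commanded (v, ω). The commanded values are illustrative assumptions:

```javascript
// Euler integration of the non-holonomic model above:
// dx/dt = v cos(theta), dy/dt = v sin(theta), dtheta/dt = omega.
const DT = 0.01;
let q = { x: 0, y: 0, theta: 0 }; // configuration [x, y, theta]

function integrate(q, v, omega) {
  return {
    x: q.x + v * Math.cos(q.theta) * DT,
    y: q.y + v * Math.sin(q.theta) * DT,
    theta: q.theta + omega * DT,
  };
}

// Constant v and omega: the robot drives a circle of radius v / omega.
for (let i = 0; i < 1000; i++) q = integrate(q, 1.0, 0.5);
console.log(q); // note the robot never translates sideways in its own frame
```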

In conclusion, non-holonomic modeling is an essential aspect of mobile robotics that allows us to accurately predict and control a robot’s movement despite the constraints imposed by its mechanical design. With an appropriate understanding of these constraints, we can design control systems that effectively navigate the robot in complex environments.



Non-holonomic constraints cover all other cases: constraints that cannot be written as an equation between coordinates alone (they often appear as inequalities, or involve velocities).

An example of a system with a non-holonomic constraint is a particle trapped in a spherical shell. In three spatial dimensions, the particle has 3 degrees of freedom. The constraint says that the distance of the particle from the center of the sphere is always less than R: √(x² + y² + z²) < R. We cannot rewrite this as an equality, so this is a non-holonomic, scleronomous constraint.

Some further reading:

http://galileoandeinstein.physics.virginia.edu/7010/CM_29_Rolling_Sphere.pdf

Ackermann steering

Ackermann steering, which is the type of steering geometry used in many automobiles, introduces a non-holonomic constraint.

In a system with Ackermann steering, the vehicle can change its position in the forward and backward direction and can change its orientation by turning its wheels and moving forward or backward, but it cannot move laterally (i.e., to the side without turning). Thus, the motion of the vehicle is constrained to be along the direction that the wheels are steering, and this constraint is velocity-dependent, not just position-dependent.

Therefore, a vehicle with Ackermann steering does not have independent control over all of its degrees of freedom; it is an example of a non-holonomic system. The vehicle has to follow a specific path to reach a desired position and orientation; it cannot simply move there in a straight line (unless the desired position happens to be directly ahead or behind along the vehicle’s current heading).

Before you’re hired as a JavaScript developer


Binary tree

A binary tree is a tree data structure in which each node has at most two children, referred to as the left child and the right child. A recursive definition using just set theory notions is that a (non-empty) binary tree is a triple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set. Some authors allow the binary tree itself to be the empty set as well.
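A minimal sketch in JavaScript, using a binary search tree as a concrete instance of a binary tree:

```javascript
// Minimal binary search tree sketch: each node holds a value and at
// most two children (left and right).
class Node {
  constructor(value) {
    this.value = value;
    this.left = null;  // left child (or null, the "empty set")
    this.right = null; // right child
  }
}

function insert(root, value) {
  if (root === null) return new Node(value);
  if (value < root.value) root.left = insert(root.left, value);
  else root.right = insert(root.right, value);
  return root;
}

// In-order traversal visits a BST's values in sorted order.
function inOrder(node, out = []) {
  if (node === null) return out;
  inOrder(node.left, out);
  out.push(node.value);
  inOrder(node.right, out);
  return out;
}

let root = null;
for (const v of [5, 2, 8, 1, 3]) root = insert(root, v);
console.log(inOrder(root)); // [1, 2, 3, 5, 8]
```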

Currying

Currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument. – Wikipedia
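A quick sketch of the difference:

```javascript
// Currying: a three-argument function becomes a chain of
// single-argument functions.
const add = (a, b, c) => a + b + c;
const curriedAdd = (a) => (b) => (c) => a + b + c;

console.log(add(1, 2, 3));        // 6
console.log(curriedAdd(1)(2)(3)); // 6

// Partial application falls out for free:
const addTen = curriedAdd(10);
console.log(addTen(5)(1)); // 16
```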

Higher-order function

A higher-order function is a function that does at least one of the following: takes one or more functions as arguments (i.e. procedural parameters), or returns a function as its result. All other functions are first-order functions. In mathematics, higher-order functions are also termed operators or functionals.
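Both flavors in one short example:

```javascript
// Higher-order functions: map takes a function as an argument, and
// makeMultiplier returns a function as its result.
const doubled = [1, 2, 3].map((n) => n * 2); // [2, 4, 6]

function makeMultiplier(factor) {
  return (n) => n * factor;
}
const triple = makeMultiplier(3);
console.log(doubled, triple(7)); // [2, 4, 6] 21
```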

Event loop

The event loop got its name because of how it’s usually implemented, which usually resembles:

```javascript
while (queue.waitForMessage()) {
  queue.processNextMessage();
}
```

queue.waitForMessage() waits synchronously for a message to arrive if there is none currently.

A very interesting property of the event loop model is that JavaScript, unlike a lot of other languages, never blocks. Handling I/O is typically performed via events and callbacks, so when the application is waiting for an IndexedDB query to return or an XHR request to return, it can still process other things like user input.
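A small example of this non-blocking behaviour:

```javascript
// The event loop in action: the timer callback runs only after the
// current message (this script) finishes, even with a 0 ms delay.
setTimeout(() => console.log('callback, runs second'), 0);
console.log('synchronous, runs first');
```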

Prototype

When a function is created in JavaScript, the JavaScript engine adds a prototype property to the function. This prototype property is an object (called the prototype object) that has a constructor property by default. The constructor property points back to the function on which the prototype object is a property. We can access the function’s prototype property using the syntax functionName.prototype.
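A short demonstration:

```javascript
// Every function gets a prototype object whose constructor property
// points back at the function itself.
function Robot(name) {
  this.name = name;
}

console.log(typeof Robot.prototype);                // "object"
console.log(Robot.prototype.constructor === Robot); // true

// Methods placed on the prototype are shared by all instances.
Robot.prototype.greet = function () {
  return `beep boop, I am ${this.name}`;
};
const r = new Robot('R2');
console.log(r.greet()); // "beep boop, I am R2"
```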

Encapsulation

Encapsulation refers to enclosing all the functionalities of an object within that object so that the object’s internal workings (its methods and properties) are hidden from the rest of the application. This allows us to abstract away or localize specific sets of functionality within objects.
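A common JavaScript pattern for this is a closure, sketched below:

```javascript
// Encapsulation via closure: `count` is not reachable from outside;
// only the returned methods can touch it.
function makeCounter() {
  let count = 0; // private state
  return {
    increment() { count += 1; },
    value() { return count; },
  };
}

const counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.value()); // 2
console.log(counter.count);   // undefined: internals are hidden
```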

Can you draw an algorithm?

Realize the difference: Impostor Syndrome and the Dunning–Kruger Effect

The Dunning–Kruger effect is a cognitive bias in which people of low ability have illusory superiority and mistakenly assess their cognitive ability as greater than it is. The cognitive bias of illusory superiority comes from the inability of low-ability people to recognize their lack of ability. Without the self-awareness of metacognition, low-ability people cannot objectively evaluate their competence or incompetence. – Wikipedia

Impostor syndrome (also known as impostor phenomenon, impostorism, fraud syndrome or the impostor experience) is a psychological pattern in which an individual doubts their accomplishments and has a persistent internalized fear of being exposed as a “fraud”. – Wikipedia

https://twitter.com/nat_sharpe_/status/1277353559756070912?s=21

Sharing Wi-Fi via the Ethernet port on Ubuntu

Your computer (wComputer) is connected to a Wi-Fi network, and you want to get a second computer (lComputer) onto the internet through that connection, but the lComputer doesn’t have a Wi-Fi adapter.

What it does have is an Ethernet port, so the wComputer can share its Wi-Fi connection over its Ethernet port to the lComputer.

I am using Ubuntu on the wComputer and Windows on the lComputer.

  • Type nm-connection-editor in your terminal.
  • Add a shared network connection by pressing the Add button.
  • Choose Ethernet from the list and press Create.
  • Click IPv4 Settings on the left.
  • Choose Shared to other computers from the Method drop-down menu.
  • Optionally, rename the connection to something like wifishare in the Connection name field at the top.

Now connect the lComputer to the wComputer with an Ethernet cable and reboot the wComputer; restarting the networking service should also do the trick.

The lComputer will get an IPv4 address from the wComputer in the 10.42.0.0/24 range, which is the default for shared connections on Ubuntu.
