Accelerators vs. Crutches: The Role of LLMs in Research and Learning


Large Language Models (LLMs) have stormed into the research and learning landscape with incredible promise. As an AI researcher and educator, I often find myself torn between excitement and caution. On the one hand, LLMs like ChatGPT and its peers are astonishingly powerful – capable of generating code, explanations, and ideas in seconds. On the other hand, I worry about a growing temptation to outsource thinking to these models. I approach this discussion with strong opinions, loosely held – I’ll state my views confidently, but I’m ready to adjust them as I learn more. In that spirit, let’s explore how LLMs can be amazing accelerators for experienced users while potentially becoming crutches that hamper genuine learning among students.

LLMs: Accelerators for Experts, Not Shortcuts for Students

It’s clear that expert researchers and developers can leverage LLMs as productivity boosters, while less experienced students might misuse them as cheat codes. An expert programmer, for example, might use an LLM to save time on boilerplate code or to get a quick refresher on an API – essentially treating the LLM as an accelerator. Because of their strong foundation, they can critically assess the AI’s suggestions and integrate them appropriately. In contrast, a student still learning the ropes might be tempted to have the LLM do their homework. This is where the alarm bells ring: if a beginner relies on the AI to solve problems for them, they skip the struggle that builds intuition and skill. One experienced engineer put it succinctly: you still need solid fundamentals and intuition when using an LLM, otherwise “you’re at the same whims of copypasta that have always existed”. In other words, without prior knowledge, a student using an LLM may simply copy answers without understanding – the age-old trap of imitation without comprehension, now turbocharged by AI.

Students need to work through problems independently – an essential part of learning that no AI should replace. In my teaching philosophy, I emphasize that struggle isn’t a bug; it’s a feature of learning. Wrestling with a tough math problem or debugging code for hours can be frustrating, but it develops critical problem-solving skills and deep understanding. If a student bypasses that process by asking an LLM for the answer, they might get the solution right now but lose out in the long run. Recent research backs this up: high schoolers who used ChatGPT on practice problems solved more of those exercises correctly, but scored significantly worse on the real test. Why? Because the AI became a crutch – the students weren’t developing their own problem-solving muscles. The researchers bluntly titled their paper “Generative AI Can Harm Learning,” noting that students with AI assistance often just asked for answers and failed to build skills on their own. This cautionary tale reinforces my stance: students need to learn to think, not just to prompt an AI.

Debugging the AI: The Challenge of Trusting LLM Outputs

One big hurdle with LLM-generated content is debugging or verifying it – especially for beginners. When an LLM writes code or explains a concept, it does so with supreme confidence (and no indication of uncertainty). A novice might be completely oblivious if that code has a subtle bug or the explanation has a slight error. The illusion of correctness is strong – the answer sounds authoritative. Seasoned experts usually approach AI output with healthy skepticism: they test the code, double-check facts, and use their experience to sniff out nonsense. A beginner, however, might take the output at face value and run into trouble when things don’t work. Debugging someone else’s (or something else’s) solution can be harder than doing it yourself from scratch. I’ve seen students struggle to fix code that “the AI told them would work,” feeling lost because they don’t understand the solution enough to tweak it. This scenario can be more time-consuming and discouraging than if they’d tried it themselves initially.

In the coding context, a darkly funny saying is circulating: “AI isn’t a co-pilot; it’s a junior dev faking competence. Trust it at your own risk.” That captures the situation well – LLMs sound confident but can make rookie mistakes. For an experienced coder, the LLM is like a junior assistant who needs supervision. For a novice, that “assistant” might confidently lead them off a cliff with a buggy approach or a misinterpreted concept. The challenge, then, is teaching learners not to blindly trust LLM outputs. They must learn to ask: “Does this answer make sense? Why does this code work (or not)? Can I verify this claim?” Without that critical eye, using LLMs can become an exercise in blind faith – the opposite of the analytical mindset we aim to cultivate in education.

Quick Answers vs. Deep Understanding

There’s also a qualitative difference between skimming an AI-generated answer and engaging in deeper learning through traditional resources. I’ll admit, it’s incredibly tempting to fire off a question to ChatGPT and get a neatly packaged answer, rather than digging through the official documentation or searching forums. It saves time in the moment. But I’ve found that what I gain in speed, I often lose in depth. Reading official documentation or a detailed StackOverflow thread might take longer, but it exposes me to the why and how, not just the what. Often, those sources include caveats, different viewpoints from commenters, or related tips that an LLM’s single answer might gloss over.

In fact, one developer quipped that LLMs are a more fluent but also more lossy way of interfacing with Stack Overflow and tutorials. The AI might give you a quick synopsis of what’s out there, but it can lose nuance and detail – kind of like reading only the summary of a long discussion. When a topic is well-covered on forums or documentation, an LLM can indeed fetch a quick answer tailored to your question. However, if the topic is unusual or not well-represented in the training data, the AI may give you irrelevant or incorrect info (“circular nonsense,” as that developer said). By contrast, if you take the time to read through documentation or ask peers, you can usually piece together a correct solution and also understand the context. I personally recall many times when slogging through a tricky manual or a long Q&A thread taught me things I didn’t even know I was missing. Those “Aha!” moments often come from the deeper dive, not the quick skim. So, while LLMs can serve up answers on a silver platter, there’s a richness in traditional learning methods that we shouldn’t lose. Quick answers have their place – especially when you already grasp the fundamentals – but for true learning, there’s no real substitute for digging in and doing the reading and thinking.

LLMs as a Tool, Not a Necessity

All that said, I’m not advocating we throw out LLMs entirely. Far from it! I see them as valuable tools – just not essential ones for learning. We should treat access to LLMs similarly to how we treat access to power tools in a workshop. A power drill is incredibly useful and can speed up construction, but a good carpenter still needs to know how to use a hand tool and understand the building principles; not every task requires the electric drill. Likewise, an LLM can accelerate certain tasks for a knowledgeable user, but a student should first learn how to hammer nails (solve problems) by hand before reaching for the power drill of AI. If one doesn’t understand the basics, the fancy tool can be detrimental or dangerous.

It’s also worth noting that LLMs are a convenience, not a right or requirement. Many of us learned skills and completed research long before these models existed. Students today should view ChatGPT or similar AI as an optional aid that can occasionally help clarify or inspire – not as a default first step for every problem. In fact, I sometimes encourage students to pretend they don’t have access to an AI assistant, to simulate real problem-solving conditions. If they get truly stuck after earnest effort, then using an LLM as a tutor or to get a hint is fine. But immediately resorting to the AI at the first sign of difficulty can become a bad habit.

Another practical aspect is that the AI landscape is rapidly evolving and chaotic. There’s a diversity of tools available now, with new models and versions coming out all the time – and it’s not apparent which one is best for a given task. For example, today we have:

OpenAI’s GPT-4 (ChatGPT) – a leading model known for its strong capabilities, often used for complex tasks (though it requires a subscription or access and has usage limits).

Between my writing this post and publishing it, Grok 3 (grok.com) became generally available, reportedly beating the other models on every benchmark.

Google Bard – another conversational AI that is freely accessible and integrates some up-to-date information, but might lag behind GPT-4 in coding accuracy.

Anthropic’s Claude – an AI with a very large context window (great for feeding lots of text) but less commonly available.

Open-source LLMs (e.g., LLaMA 2, Falcon) – models you can run or fine-tune yourself; they offer flexibility and privacy but require technical know-how and often lack the quality of top-tier models.

This zoo of AI models means that there’s no single standard tool everyone must use – and each comes with trade-offs. The “best” choice can depend on the task, personal preference, or even ethical considerations (like keeping data private). Given this uncertainty, I view LLMs as helpful accessories in learning rather than core essentials. If a student doesn’t have access to GPT-4, it’s not the end of the world – they can still learn effectively via textbooks, websites, and their reasoning. Conversely, having the fanciest AI doesn’t automatically make one a better learner or researcher; it always depends on how you use it.

Guidelines for Use: Finding the Balance

So, what’s the practical way to balance leveraging LLMs and ensuring real learning happens? My current recommendation (admittedly a strong opinion, loosely held) is a somewhat conservative approach: do not provide students automatic access to LLMs in official learning settings, but don’t ban them outright. In our lab, for instance, I don’t hand out ChatGPT accounts or integrate an AI into the core curriculum. Students are expected to grapple with assignments using their minds (and resources like books, notes, and the internet). If they choose to consult an LLM on their own time, that’s their decision – but we don’t encourage it or build our teaching around it. This approach conveys that we value the learning process over just getting the answer. It also avoids any notion that using the AI is “required” or officially endorsed, which might make some students uncomfortable or overly reliant.

For researchers and more advanced learners, I take a case-by-case stance. In research, time is precious, and the problems are often open-ended. If an LLM can speed up a literature review, help brainstorm experimental designs, or even generate some boilerplate code for data analysis, I’m open to its use. The key is that the researcher must remain in the driver’s seat. We evaluate: Does using the LLM meaningfully benefit the project? Are we double-checking whatever it produces? In some cases – for example, when writing a quick script to format data – using AI is a harmless shortcut. In other cases – like deriving a critical formula or core algorithm – it might be too risky to trust the AI or simply crucial for the researcher’s growth to solve it manually. We also consider ethical and privacy factors: feeding proprietary research data into a public AI may be a bad idea, for example. There’s no one-size-fits-all rule here; it requires judgment. But broadly, students and beginners get more restrictions (for their own good), while experienced folks have more leeway with careful oversight.

(I am writing this while able to run inference at 8 tokens per second on my own DeepSeek R1 model – so the privacy concern is debatable.)

My Personal Approach (Strong Opinions, Loosely Held in Action)

To lay my cards on the table: I’m not just preaching from an ivory tower – I actively grapple with how I use LLMs in my daily work. I’ll share a personal approach that reflects the balanced philosophy I’ve been discussing. I do use ChatGPT (and other LLMs) as a kind of writing and brainstorming partner. For example, when drafting narration scripts for a presentation or exploring how to phrase a concept clearly, I’ll have a chat with the AI. It’s fantastic for generating a few different ways to explain an idea, or even giving me a rough narrative flow that I can then personalize. In those cases, I treat the LLM like a collaborator who helps me articulate thoughts – it genuinely accelerates my work without detracting from my understanding (since the experience must first come from me to guide the AI).

However, when it comes to coding or solving research problems, I deliberately stay “hands-on”. If I’m learning a new programming language or tackling a tricky bug in my code, I resist the urge to ask the AI to fix it for me. I’m refining my skills and reinforcing my knowledge by working through it myself. It’s slower and sometimes more painful, but it’s the productive struggle I believe in. I often remind myself that every error I debug and every algorithm I write from scratch is an investment in my future abilities. Using AI to do those things for me would feel like cheating myself out of learning. So in practice, I use LLMs for acceleration in areas where I’m already competent (writing, summarizing, ideating), and avoid using them as a crutch in areas where I’m still growing (learning new tech, building intuition in a domain). This personal rule of thumb has served me well so far – it lets me enjoy the benefits of AI augmentation without losing the satisfaction (and long-term benefits) of doing hard things on my own.

Conclusion: Proceed with Caution and Curiosity

In closing, I maintain that LLMs are potent allies in modern research and learning – but we must engage with them thoughtfully. As an educator, I want to produce thinkers and problem-solvers, not just people who can play “20 questions” with an AI until it spits out an answer. The long-term effects of widespread LLM use on learning and cognition are still unknown. Will students of the future struggle to think independently if they grow up always consulting an AI? Or will they reach even greater heights by offloading tedious tasks to machines and focusing on creativity? We don’t fully know yet. That uncertainty is precisely why a critical, go-slow approach feels right to me at this stage. Let’s use these fantastic tools, but constantly ask ourselves why and how we’re using them. By holding our strong opinions loosely, we stay open to change: if evidence down the line shows that deeper LLM integration improves learning without drawbacks, I’ll happily adapt. This is written from the vantage point of having used everything from GPT-3-era ChatGPT through the models available as of March 2025 – hardly a long view on the timescale of AI model development. Until then, I’ll continue championing a balanced path – one where human intuition, struggle, and insight remain at the center of learning, with AI as a supportive sidekick rather than the star of the show. After all, the goal as an educator is not just to get answers faster, but to cultivate minds that can understand and question the world – with or without an LLM whispering in our ear.

Non-holonomic modeling of mobile robots

Holonomic vs. Non-holonomic

In robotics, the terms “holonomic” and “non-holonomic” are used to classify the motion constraints of a robot or a system.

Holonomic constraints

These are constraints that depend only on the position and orientation of the system, not on its velocity or acceleration. In other words, a system is holonomic if the number of controllable degrees of freedom is equal to the total degrees of freedom of the system. Holonomic systems can move in any direction in their configuration space.

For example, a robot with mecanum wheels or an omni wheel design can move directly forward, backward, laterally left, and laterally right, as well as rotate in place. This is because the design of these wheels allows for motion in multiple directions without changing the robot’s orientation.

Non-holonomic constraints

These are constraints that involve the velocity of the system and not just its position and orientation. In other words, a system is non-holonomic if the number of controllable degrees of freedom is less than the total degrees of freedom of the system. Non-holonomic systems are limited in their movement due to these constraints.

A common example of a non-holonomic system is a car or a differential drive robot. A car, for instance, cannot move laterally because the orientation of its wheels only allows for forward or backward motion.

In summary, whether a robot is classified as holonomic or non-holonomic depends on its degrees of freedom and the constraints imposed on its motion by its design and control mechanisms. It’s crucial to consider these constraints when designing a robot’s control system.

Modelling

The non-holonomic model for mobile robots is a mathematical framework that represents the constraints imposed on a robot’s movement due to its mechanical design. Non-holonomic constraints are velocity constraints that limit the robot’s ability to move in certain directions. For example, a car cannot move directly sideways due to its wheels’ orientation, presenting a non-holonomic constraint.

A typical example of a non-holonomic system is a differential drive robot. A differential drive robot has two wheels that are driven independently, and the robot moves by changing the relative speeds of these wheels.

The robot’s movement is typically represented using the configuration vector, which contains the robot’s position and orientation:

Q = [x, y, θ]ᵀ

The non-holonomic constraints for a differential drive robot can be represented by the following model:

dx/dt = v · cos(θ)
dy/dt = v · sin(θ)
dθ/dt = ω

where:

  • v is the linear velocity
  • ω is the angular velocity
  • θ is the robot’s orientation

This set of equations is a representation of the non-holonomic model, indicating the robot’s change in position (dx/dt, dy/dt) and orientation (dθ/dt) over time as a function of its linear and angular velocities.
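As a quick illustration (my own sketch, not part of the model itself), these equations can be integrated numerically, for instance with a simple forward-Euler step, to simulate where a velocity command takes the robot:

// Forward-Euler integration of the differential-drive (unicycle) model above.
// q holds the configuration [x, y, θ]; v is the linear velocity, omega the angular velocity.
function step(q, v, omega, dt) {
  return {
    x: q.x + v * Math.cos(q.theta) * dt,
    y: q.y + v * Math.sin(q.theta) * dt,
    theta: q.theta + omega * dt,
  };
}

// Drive at 0.5 m/s while turning at 0.2 rad/s for 5 s, in 10 ms steps.
let q = { x: 0, y: 0, theta: 0 };
for (let i = 0; i < 500; i++) {
  q = step(q, 0.5, 0.2, 0.01);
}
console.log(q); // no choice of v and omega ever produces a purely sideways displacement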

In conclusion, non-holonomic modeling is an essential aspect of mobile robotics that allows us to accurately predict and control a robot’s movement despite the constraints imposed by its mechanical design. With an appropriate understanding of these constraints, we can design control systems that effectively navigate the robot in complex environments.



Non-holonomic constraints are basically just all other cases: when the constraints cannot be written as an equation between coordinates (but often as an inequality).

An example of a system with non-holonomic constraints is a particle trapped in a spherical shell. In three spatial dimensions, the particle then has 3 degrees of freedom. The constraint says that the distance of the particle from the center of the sphere is always less than R: √(x² + y² + z²) < R. We cannot rewrite this as an equality, so this is a non-holonomic, scleronomous constraint.

Some further reading:

http://galileoandeinstein.physics.virginia.edu/7010/CM_29_Rolling_Sphere.pdf

Ackermann steering

Ackermann steering, which is the type of steering geometry used in many automobiles, introduces a non-holonomic constraint.

In a system with Ackermann steering, the vehicle can change its position in the forward and backward direction and can change its orientation by turning its wheels and moving forward or backward, but it cannot move laterally (i.e., to the side without turning). Thus, the motion of the vehicle is constrained to be along the direction that the wheels are steering, and this constraint is velocity-dependent, not just position-dependent.

Therefore, a vehicle with Ackermann steering does not have independent control over all of its degrees of freedom. It is an example of a non-holonomic system. The vehicle has to follow a specific path to reach a desired position and orientation, it cannot simply move there in a straight line (unless the desired position happens to be directly ahead or behind along the vehicle’s current heading).
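To make the constraint concrete, here is a small sketch (my own, using the common kinematic bicycle approximation and an assumed wheelbase) in which the heading can only change while the vehicle is moving, and no input produces sideways motion:

// Kinematic bicycle approximation of an Ackermann-steered vehicle.
// delta is the front-wheel steering angle, L the wheelbase (assumed 2.5 m here).
const L = 2.5;

function bicycleStep(q, v, delta, dt) {
  return {
    x: q.x + v * Math.cos(q.theta) * dt,
    y: q.y + v * Math.sin(q.theta) * dt,
    theta: q.theta + (v / L) * Math.tan(delta) * dt, // heading changes only if v is non-zero
  };
}

// With v = 0 the vehicle cannot reorient, no matter how the wheels are steered.
console.log(bicycleStep({ x: 0, y: 0, theta: 0 }, 0, 0.4, 0.1)); // { x: 0, y: 0, theta: 0 }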

Before you’re hired as a JavaScript developer


Binary tree

A binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set-theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set. Some authors allow the binary tree to be the empty set as well.
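A minimal JavaScript sketch of the definition (my own illustration):

// A node holds a value and at most two children; null stands in for the empty tree.
class TreeNode {
  constructor(value, left = null, right = null) {
    this.value = value;
    this.left = left;
    this.right = right;
  }
}

//       2
//      / \
//     1   3
const root = new TreeNode(2, new TreeNode(1), new TreeNode(3));
console.log(root.left.value, root.right.value); // 1 3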

Currying

Currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument. – Wikipedia
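A small example of the idea (my own):

// Curried addition: each call takes a single argument and returns a new function.
const add = a => b => c => a + b + c;
console.log(add(1)(2)(3)); // 6

// Partial application falls out for free.
const addFive = add(2)(3);
console.log(addFive(10)); // 15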

Higher-order function

A higher-order function is a function that does at least one of the following: takes one or more functions as arguments (i.e., procedural parameters), or returns a function as its result. All other functions are first-order functions. In mathematics, higher-order functions are also termed operators or functionals.
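For example (my own snippets): Array.prototype.map takes a function as an argument, and twice below returns a function as its result.

const inc = x => x + 1;
console.log([1, 2, 3].map(inc)); // [2, 3, 4] – map is a higher-order function

// twice is higher-order in both senses: it takes a function and returns one.
const twice = f => x => f(f(x));
console.log(twice(inc)(7)); // 9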

Event loop

The event loop got its name because of how it’s usually implemented, which usually resembles:

while (queue.waitForMessage()) {
  queue.processNextMessage();
}

queue.waitForMessage() waits synchronously for a message to arrive if there is none currently.

A very interesting property of the event loop model is that JavaScript, unlike a lot of other languages, never blocks. Handling I/O is typically performed via events and callbacks, so when the application is waiting for an IndexedDB query to return or an XHR request to return, it can still process other things like user input.

Prototype

When a function is created in JavaScript, the JavaScript engine adds a prototype property to the function. This prototype property is an object (called a prototype object) that has a constructor property by default. The constructor property points back to the function on which the prototype object is a property. We can access the function’s prototype property using the syntax functionName.prototype.
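A short example (my own) showing the relationship described above:

// Every function gets a prototype object whose constructor points back at the function.
function Robot(name) {
  this.name = name;
}
console.log(Robot.prototype.constructor === Robot); // true

// Methods placed on the prototype are shared by every instance created with `new`.
Robot.prototype.greet = function () {
  return 'Hi, I am ' + this.name;
};
console.log(new Robot('R2').greet()); // "Hi, I am R2"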

Encapsulation

Encapsulation refers to enclosing all the functionalities of an object within that object so that the object’s internal workings (its methods and properties) are hidden from the rest of the application. This allows us to abstract away or localize specific sets of functionality within objects.
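One common way to get this in JavaScript is with a closure (my own example):

// The closure hides `count`; only the returned methods can read or change it.
function makeCounter() {
  let count = 0;
  return {
    increment() { count += 1; },
    value() { return count; },
  };
}

const counter = makeCounter();
counter.increment();
console.log(counter.value()); // 1
console.log(counter.count);   // undefined – the internal state is not exposed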

Can you draw an algorithm?

Realize the difference: Impostor Syndrome and the Dunning–Kruger Effect

The Dunning–Kruger effect is a cognitive bias in which people of low ability have illusory superiority and mistakenly assess their cognitive ability as greater than it is. The cognitive bias of illusory superiority comes from the inability of low-ability people to recognize their lack of ability. Without the self-awareness of metacognition, low-ability people cannot objectively evaluate their competence or incompetence. – Wikipedia

Impostor syndrome (also known as impostor phenomenon, impostorism, fraud syndrome or the impostor experience) is a psychological pattern in which an individual doubts their accomplishments and has a persistent internalized fear of being exposed as a “fraud”. – Wikipedia

https://twitter.com/nat_sharpe_/status/1277353559756070912?s=21

Sharing Wi-Fi via the Ethernet port on Ubuntu

Your computer (wComputer) is connected to a Wi-Fi network, and you want to connect another computer to that network to share the internet connection, but this second computer (lComputer) doesn’t have a Wi-Fi adapter.

What it does have is an Ethernet port, so the wComputer with the Wi-Fi connection can share that network over its Ethernet port to the lComputer.

I am using Ubuntu on the wComputer and Windows on the lComputer.

  • Type nm-connection-editor in your terminal.
  • Add a shared network connection by pressing the Add button.
  • Choose Ethernet from the list and press Create.
  • Click IPv4 Settings on the left.
  • Choose “Shared to other computers” from the Method drop-down menu.
  • Enter a new name like wifishare as the Connection name at the top (optional).

Now connect the lComputer to the wComputer with an Ethernet cable and reboot the wComputer; restarting the networking service should also do the trick.

The lComputer will get an IPv4 address from the wComputer (10.42.0.24 in my case); the 10.42.0.x range is the default on Ubuntu.

[1] https://www.cesariogarcia.com/?p=611

Sending and receiving messages between the Raspberry Pi’s onboard BLE and an Arduino with an HM-11

Goal:

Send messages between an HM-11 BLE module and my Raspberry Pi 3 with its onboard BLE.

How:

BlueZ comes installed with the Raspbian distro.

Wiring is basic UART for connecting the HM-11 to the Arduino. In my case, I am using a Teensy 3.6 programmed with the Arduino IDE.

Use this serial proxy sketch to get started; if you have the Arduino IDE, open this file. The Teensy serial pins are 0 and 1. I kept it as software serial so it’s easy to adapt to any other board by just changing the pins used for the UART.

That’s all you need to set up on the Arduino side. Everything should work now.

Maybe try a few AT commands. AT should give you OK if the module is being used for the first time; if not, you’ll get OK+LOST, which means the module had a previous connection that was lost. Either way, I would run three AT commands just to be on the safe side: AT+RENEW, AT+RESET, AT.

If you open a pull request adding a newline after every response from the HM-11 module, that would be great.

The next step is to set up the Raspberry Pi; everything should be available directly after setting up an RPi with a fresh SD card. Be sure you are using a Raspberry Pi 3 or newer, which has onboard BLE.

With Commandline:

With Python:

You should know the HM-11’s MAC address; you will already have it if you went the command-line route. If not, you can send AT+ADDR? and add “:” after every two characters of the response.
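For example, a tiny helper of my own (not part of the bridge repo) to turn the raw address returned by AT+ADDR? into the colon-separated form:

// Hypothetical helper: format "20C38FF6B2DB" as "20:C3:8F:F6:B2:DB".
const toMac = raw => raw.trim().match(/.{2}/g).join(':').toUpperCase();
console.log(toMac('20c38ff6b2db')); // "20:C3:8F:F6:B2:DB"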

Install the system dependencies, go into the directory, and install the Python dependencies. Before starting the Python program, change the address to the one you want to connect to. After that, start the program; it should send “Hello world” once to the device and then wait for any messages. Whatever you type in the serial monitor will be printed out by the Python program.

# sudo apt-get install build-essential python3-pip git
# git clone https://github.com/akrv/BLEserialbridge
# cd BLEserialbridge
# pip3 install -r requirements.txt
# python3 serial_bridge.py
Below are the AT commands I sent while setting up the module.


Clock synchronisation in systems communication

Motive

To start with: this is a guide to implementing a clock synchronisation algorithm on low-power wireless sensor networks.

The following are some pointers and papers to read before wandering off into the wild, getting things done, and reinventing the wheel.

Literature survey

  • Message Time-stamping in Sensor Networks
  • Flooding time synchronisation protocol [1]
    • this protocol is very good in terms of single-hop time synchronisation
    • it uses linear regression, so get ready to do some computation on your MCU
    • there are very good numbers and explanations in these slides [2]
    • Global Clock Skew
  • for multi-hop networks, this paper [3] is a good read
  • Temperature Compensated Time Sync [4]
    • The only requirement on that time synchronization protocol is that it can calculate the current frequency error with respect to the reference node’s clock. Unlike other time synchronization protocols, TCTS also records the current temperature during a synchronization exchange. Both the temperature and frequency error at the end of the synchronization exchange is cached in a frequency vs. temperature table in memory. [5]
    • Before attempting subsequent resynchronization, TCTS will measure the current temperature and consults its internal calibration table. If the current frequency error for the measured temperature is cached, TCTS will not attempt to resynchronize with the reference node since a new time estimate is not required. Eventually, when all of the operating temperatures have been observed, TCTS will have auto-calibrated the low-stability clock, essentially providing a TCXO timebase. [5]
    • In addition, since we know that the curve should fit a quadratic function of the form

      fe(T) = −A · (T − T0)² + B,

      where A is the temperature coefficient, T0 = 20 °C represents room temperature, and B is a frequency error offset, we can fit the measured calibration points to this curve and obtain frequency error estimates for previously unobserved temperatures, thus drastically improving the range and accuracy while eliminating a factory calibration step and allowing on-demand calibration. [5] (A small curve-fitting sketch follows this list.)

  • Virtual High-Resolution Time (VHT) [5]
    • basic idea behind VHT
        • During active periods, the high-frequency clock is turned on, and a hardware counter counts the number of high-frequency clock ticks that occur during each low-frequency clock interval, i.e., there are φ0 high-frequency clock ticks during each low-frequency one.

          φ0 = fH / fL

        • At the time of the event, the system records not only the value of the counter sourced by the low-frequency clock but also the value of the counter sourced by the high-frequency clock which was reset at the end of the most recent clock tick of the low-frequency clock. Thus, this second timer will indicate the phase φ within a low-frequency clock tick, allowing an effective resolution to be up-sampled to the high-frequency clock (modulo one cycle of jitter). The event time is sampled as

          tevent = CL · φ0 + φ.

        • While the high-frequency clock is not phase-locked with the low-frequency clock, the phase error is limited to < 1/fH, and thus is of the same order as the quantization error.
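As a quick sketch of the TCTS calibration idea above (my own illustration, not code from [4] or [5]; the sample data and coefficient values are made up), the quadratic frequency-error model can be fitted to cached (temperature, frequency error) points with ordinary least squares, since the model is linear in u = (T − T0)²:

const T0 = 20; // room temperature in °C, per the model above

function fitTcts(samples) { // samples: [{ T, fe }, ...]
  const n = samples.length;
  let su = 0, sf = 0, suu = 0, suf = 0;
  for (const { T, fe } of samples) {
    const u = (T - T0) ** 2;
    su += u; sf += fe; suu += u * u; suf += u * fe;
  }
  const slope = (n * suf - su * sf) / (n * suu - su * su); // slope = -A
  const B = (sf - slope * su) / n;
  return { A: -slope, B, predict: T => slope * (T - T0) ** 2 + B };
}

// Made-up calibration points generated with A = 0.035 ppm/°C² and B = 2 ppm.
const cal = [-10, 0, 10, 20, 30, 40].map(T => ({ T, fe: -0.035 * (T - 20) ** 2 + 2 }));
const model = fitTcts(cal);
console.log(model.A.toFixed(3), model.B.toFixed(3)); // ≈ 0.035 2.000
console.log(model.predict(55).toFixed(2));           // estimate at a previously unobserved temperature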

Implementation of VHT in microcontrollers

The requirements are quite simple, and the target chipset used in this project provides what is needed:

  1. 2 clocks driven by two oscillators
  2. 2 timers with capture and compare modes.

Capture mode retrieves a timer value when a signal event occurs.

Compare mode constantly monitors a timer’s counter value and compares it to a value set by the application; when they match, it triggers an event.

The target implementation is on the CC1350 chipset [6], which has four general-purpose timer modules (eight 16-bit or four 32-bit timers). With the newer hardware revision, the CC1352R, almost the same chip can also run Bluetooth 5.0; it has an ARM Cortex-M4 instead of a Cortex-M3, and that upgrade, along with the increased memory footprint, can be attributed to the new Bluetooth 5.0 standard.

The application framework will be Contiki-OS [7]. I will use the latest development version, which also supports the SensorTag board and the LaunchPad [8]. Toolchain setup and compilation of Contiki-OS are covered in this post.

Illustration of the Virtual High-resolution Time event time-stamping mechanism on a microcontroller [5].

The two capture units are connected to the event. The paper expects the event interrupt to come from the start-of-frame delimiter (SFD), which was previously justified as giving better resolution for clock synchronisation in Figure 2 of [5].

An additional capture unit on the fast timer captures the counter on every rising edge of the low-frequency clock; this is the sync event, where the value h0 is stored in one of the capture units. Later, when the SFD line rises, the two capture units on the two timers store l0 (the value of the low-frequency counter) and h1 (the value of the high-frequency counter), respectively. Using these three captured values, the event time can be calculated as

tevent = l0 · φ0 + (h1 − h0) mod φ0.
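As a quick illustration (my own sketch of the formula above, not code from [5]; it assumes the counter values fit in ordinary numbers and that the fast counter does not wrap between the two captures):

// phi0 is the nominal number of fast ticks per slow tick, e.g. 8 MHz / 32 kHz.
const phi0 = Math.round(8e6 / 32768); // ~244

// l0: slow counter at the SFD edge
// h0: fast counter captured at the most recent slow-clock rising edge
// h1: fast counter captured at the SFD edge
function vhtEventTime(l0, h0, h1) {
  const phase = (h1 - h0) % phi0; // sub-tick phase, in fast ticks
  return l0 * phi0 + phase;       // event time expressed in fast ticks
}

// Example: SFD arrives 57 fast ticks into slow tick number 1000.
console.log(vhtEventTime(1000, 120, 177)); // 244057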

Counter overflows are an issue to be tackled. Overflows of the high-frequency counter are not a problem, since it runs only in short bursts while the system is synchronising or listening for a packet. The low-frequency counter, however, must be 64 bits wide: a 32-bit counter would overflow within about 5 minutes, and the low-frequency counter is used over long intervals.

As suggested in [5], a 32 kHz clock and an 8 MHz clock are used for the timers. The MCU supports clock speeds of up to 48 MHz.

References:

[1] http://www.isis.vanderbilt.edu/sites/default/files/Maroti_M_11_3_2004_The_Floodi.pdf

[2] https://www.tik.ee.ethz.ch/file/5562ed988ca095af8331114dd81c6c70/sensys09_slides.pdf

[3] https://people.mpi-inf.mpg.de/~clenzen/pubs/LSW09optimal.pdf

[4] https://ieeexplore.ieee.org/document/5170185/

[5] https://web.eecs.umich.edu/~prabal/pubs/papers/schmid10vht.pdf

[6] http://www.ti.com/lit/ds/symlink/cc1350.pdf

[7] http://contiki-ng.org/

[8] https://github.com/contiki-ng/contiki-ng/wiki/Platform-srf06-cc26xx

 

Decentralized and distributed systems

Pointers for distributed vs decentralised:

Some bookmarks to read if you are a distributed systems researcher [1]

  • A decentralized system is a subset of a distributed system.
  • The primary difference is how/where the “decision” is made and how the information is shared throughout the control nodes in the system.
  • Decentralized means that there is no single point where the decision is made.
  • Every node makes a decision for its own behaviour, and the resulting system behaviour is the aggregate response.
  • A key characteristic of decentral systems is that typically no single node will have complete system information.
  • “Distributed means that the processing is shared across multiple nodes, but the decisions may still be centralized and use complete system knowledge,” says Coinbase in their blog post
  • A scenario to think of:
    • Some control algorithms I’ve seen for multiple quad-copter control are purely distributed in that a central over-seer gives optimization problems to each copter to solve then return the solution.
    • The over-seer will then issue commands based on the aggregate result.
  • Here’s a philosophical question:
    • if the over-seer is “voted” into office by the nodes, is it decentralized or centralized?
    • I’d have to say decentralized, but it is arguable, says MaRi Eagar
    • which then leads me to ask: what exactly is a distributed system?

Keywords that matter

When you start the debate (centralized systems are included here): [2]

  • Points of Failure / Maintenance
  • Fault Tolerance / Stability
  • Scalability / Max Population
  • Ease of development / Creation
  • Evolution / Diversity

References:

  1. What is the difference between decentralized and distributed systems?
  2. Centralized vs Decentralized vs Distributed

Compiling Contiki for CC1350

Setup:

  • macOS 10.13.5
  • VM – VirtualBox
  • Guest OS: Ubuntu 16.04

Set up Contiki on the local machine:

Remove any old ARM toolchain, add the PPA, and install the cross-compiler; you might need sudo rights.

# sudo apt-get remove gcc-arm-none-eabi gdb-arm-none-eabi binutils-arm-none-eabi
# sudo add-apt-repository ppa:team-gcc-arm-embedded/ppa
# sudo apt-get update
# sudo apt-get install gcc-arm-embedded

Compile Contiki hello-world for Native:

Native means that your code will compile and run directly on your host PC.
For me, it will run on the Ubuntu 16.04 OS in the VM.

cd into your development folder.

# git clone https://github.com/contiki-os/contiki
# cd contiki/examples/hello-world
# make TARGET=native
mkdir obj_native
  CC        ../../core/ctk/ctk-conio.c
  CC        ../../platform/native/./contiki-main.c
  CC        ../../platform/native/./clock.c
  CC        ../../core/dev/leds.c
### output truncated ###
  AR        contiki-native.a
  CC        hello-world.c
  LD        hello-world.native
rm hello-world.co

The TARGET variable tells the build system to compile for the current platform;
this is what we later change to cross-compile for our target platform.

Type the following command to run the hello-world program.

# ./hello-world.native
Contiki-3.x-3343-gbc2e445 started with IPV6, RPL
Rime started with address 1.2.3.4.5.6.7.8
MAC nullmac RDC nullrdc NETWORK sicslowpan
Tentative link-local IPv6 address fe80:0000:0000:0000:0302:0304:0506:0708
Hello, world

If you see output like this, your compilation worked!

Compiling Contiki-OS for CC13xx

The only thing to be changed is the target. make clean is an important step.

make clean
make TARGET=srf06-cc26xx BOARD=sensortag/cc1350

The compilation works if you see output like the following:

CC ../../cpu/cc26xx-cc13xx/lib/cc13xxware/startup_files/ccfg.c
CC ../../cpu/cc26xx-cc13xx/./ieee-addr.c
AR contiki-srf06-cc26xx.a
CC ../../cpu/cc26xx-cc13xx/./fault-handlers.c
CC ../../cpu/cc26xx-cc13xx/lib/cc13xxware/startup_files/startup_gcc.c
CC hello-world.c
LD hello-world.elf
arm-none-eabi-objcopy -O ihex hello-world.elf hello-world.i16hex
srec_cat hello-world.i16hex -intel -o hello-world.hex -intel
arm-none-eabi-objcopy -O binary --gap-fill 0xff hello-world.elf hello-world.bin
cp hello-world.elf hello-world.srf06-cc26xx
rm hello-world.i16hex hello-world.co obj_srf06-cc26xx/fault-handlers.o obj_srf06-cc26xx/startup_gcc.o

The following error means that the installation of the cross-compilation toolchain wasn’t successful:

/bin/sh: 1: arm-none-eabi-gcc: not found

The next step would be to compile with Contiki-NG, since active development is happening there, and to base all further development on that repo.

# git clone https://github.com/contiki-ng/contiki-ng
# cd contiki-ng
# git submodule update --init --recursive

Navigate to the hello-world example and run the following:

# make clean
# make TARGET=srf06-cc26xx BOARD=sensortag/cc1350

References:

[1] https://sunmaysky.blogspot.com/2015/08/setup-6lbr-to-run-6lowpan-with-cc2531.html

[2] https://sunmaysky.blogspot.com/2015/09/contiki-subg-hz-6lowpan-on-cc1350.html