C is the predominant language used by firmware, scaling from small microcontroller-based systems up to much larger multi-core systems. It is often thought that developing a large system first and then scaling down to a smaller system is the easier way to go about the task. However, I have seen that developing the smaller system first is usually a much more viable approach.

I have written a short story that depicts a possible scenario in an exaggerated way. The intention is for it to be a fun read while portraying the significance of my supposition:

A Theatrical Scenario:


“Well, then why don’t we just develop our new IoT-based temperature controller on a PC running Linux?” said Bob, shrugging his bony shoulders in an exasperated manner. “That way we can take advantage of these low-cost, high-performance development tools that are so freely available. We don’t even need hardware – we can just test it all on our laptops, or even on this…” he added conspiratorially, carefully revealing a miniature rectangular shape from one of his cavernous pockets. It reflected the fluorescent lighting and looked strangely silver, even though it was predominantly green. Sam could now see what it was – something that had much of the team in raptures of awe, and even a few up for hours each night preparing for the next robotics challenge. Yes, most could see what it was – the latest Raspberry Pi.

Sam’s eyes drifted up the wall in contemplation of this new revelation and came to rest on a spider that had somehow made its way out of arm’s reach, and was now quietly spinning its web between the ceiling and the wall.

Sam’s contemplation was interrupted when Bob added theatrically, “We will be able to sell this initial Raspberry Pi incarnation as our high-end offering to our biggest customers – they need all these extra high-end features it can so easily offer…” Silence filled the room as everyone contemplated this brilliant strategy.


The spell was broken by Sue’s high voice: “Um – isn’t this a bit expensive? … And the power consumption – it’s an industrial application we were asked to do, installed where no one can easily reach – we can’t have any failures… our company’s reputation for quality…” she stammered.

Bob stood, somehow making himself tower over Sue, who was sitting in the opposite corner, put his hands on the table, leaned forward, and looked down through bulbous eyes as he spoke slowly and precisely: “No, Susie – it’s not unreliable, it’s running Linux… I even have a security camera in my back garden running Linux that has never-ever-ever hung. It only costs $35 – we just can’t compete with this kind of price. And we can get the current consumption down with a bit of clever design,” he added emphatically.

Sam saw that the spider had now completed its web and had settled perfectly still, clutching its strong invisible threads.

The large sleeves on Bob’s dark coat were now draped open like two hungry mouths as he raised his arms dismissively. “And this is where phase two kicks in… once it’s working perfectly and debugged – a six-month task at the most – porting the code over to our new low-cost industrial microcontroller-based system will be the most trivial thing ever. And that’s if it’s even needed…” he added nonchalantly. “…The prices of these SBC things are dropping like stones – and with this kind of reliability, it’s Linux after all!”

Some motion caught Sam’s eye: a moth, which must have been dazzled by a sliver of sunlight poking through the window, was now helplessly fluttering toward the web…



I wonder how many of you can imagine how this fictitious scenario ends. How many of you have come across this type of situation? I know this type of thing must have happened countless times.

The unique constraints found in many embedded systems place challenges on one’s ability to scale vertically to a degree not often found in other programming domains, particularly when scaling downwards. My intuitive guess is that systems that have been scaled successfully have usually been scaled upwards. As the features and firmware grow, the hardware resources available within the microcontroller family fortunately also tend to grow at a similar rate over time. Sometimes the firmware is ported to a much larger microcontroller, which handles the exercise with relative ease. One isn’t forced to use all the cool features available on the much bigger system from day one.

What Happens When Reality Finally Sets In:


Needless to say, late in the development cycle (which took quite a bit longer than six months), this fictitious little development team found that producing the ultra-competitive, world-class product their market demanded actually required a highly specialized design, of a scale perfectly in balance with the requirements. Every possible optimization was needed to deliver the kind of amazing, shining masterpiece the world had come to expect from their previous efforts.

Here are some of the scenarios one may be faced with during this rather bumpy ride:

“But this bloatware just doesn’t fit on this smaller microcontroller…”

Nature abhors a vacuum, and it often seems like a system hates to have unused resources – or at least, programmers love to use up resources as quickly as is comfortable. Very little effort goes into keeping the code small and efficient.

This should be seen as an obvious risk, but when developing under pressure one tends to just get the system working at all costs. When working on a large system, there is usually no real feedback loop telling you how well the code would work on a much smaller system, and often no one really cares – the immediate deadline is looming large and ominous.

Another way one can be trapped with bloat is when a third-party module is linked in to do something specific, but it also does a lot more than what is really required. Removing what you don’t want is usually impractical – time-consuming, risky, and difficult when there are many dependencies – so you end up importing the whole module.
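One partial mitigation, if you are building with GCC and binutils, is to let the linker discard the parts of a module that nothing references: compile each function and data item into its own section and garbage-collect unused sections at link time. A typical Makefile fragment might look like this (flags vary by toolchain, and this only removes code the linker can prove is unreachable):

```make
# Place each function/data item in its own section so the
# linker can discard anything unreferenced.
CFLAGS  += -Os -ffunction-sections -fdata-sections
LDFLAGS += -Wl,--gc-sections

# Optional: report what was discarded, to see how much of a
# third-party module actually ends up linked in.
LDFLAGS += -Wl,--print-gc-sections
```

This won’t rescue a module whose bloat is genuinely reachable from the one feature you use, but it at least gives you a feedback loop on how much dead weight you are carrying.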

“But it was such a slippery slope…”


You find an amazing library that logs every call to a database at microsecond resolution. This, you find, is great, since the primary purpose of the product is also to log data, and you were looking for a convenient way to store it – problem solved, use a database. You were originally going to store the temperature logs in a ring buffer in flash and upload them to the server, where they could be analyzed. Now that you have a DB, you can also query the data while it is still stored locally. This is just what will give your product that competitive edge, because you can now calculate a linear regression on the deltas of historical data using a simple query and predict the future trend. Based on this, you could switch on an air-cooler relay to stop the temperature rising at the optimum point. And to do these heavy calculations, you find a library that already does much of this for you – plus a lot extra, of course…

You can probably see where this is heading. Each time you introduce a heavyweight component, you erode the downward scalability of the firmware in a subtle but insidious way. At some point, you find that getting the firmware to work on a leaner system is a virtually impossible task because of the hobbling effect of the overweight design. Not being able to support a DB within the more limited memory resources, and the inability to number-crunch at a speed comparable to the much heavier system, is a simple reality that can be painful to address.
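For the common case of fitting a trend line, a database is overkill: a least-squares slope over a small ring buffer needs only a few dozen bytes of state and no external libraries. Here is a minimal sketch of that original ring-buffer approach (type and function names are hypothetical, and the buffer size is illustrative):

```c
#include <stddef.h>

#define LOG_SIZE 16  /* small history buffer held in RAM or flash */

typedef struct {
    float samples[LOG_SIZE];
    size_t head;   /* next write position */
    size_t count;  /* number of valid samples stored so far */
} temp_log_t;

static void temp_log_push(temp_log_t *log, float t)
{
    log->samples[log->head] = t;
    log->head = (log->head + 1) % LOG_SIZE;
    if (log->count < LOG_SIZE)
        log->count++;
}

/* Least-squares slope (degrees per sample) over the stored history:
 * enough to decide whether the temperature trend is rising and when
 * to switch the cooler relay, with no database in sight. */
static float temp_log_slope(const temp_log_t *log)
{
    size_t n = log->count;
    if (n < 2)
        return 0.0f;
    /* index of the oldest stored sample */
    size_t start = (log->head + LOG_SIZE - n) % LOG_SIZE;
    float sx = 0, sy = 0, sxy = 0, sxx = 0;
    for (size_t i = 0; i < n; i++) {
        float x = (float)i;
        float y = log->samples[(start + i) % LOG_SIZE];
        sx += x; sy += y; sxy += x * y; sxx += x * x;
    }
    return ((float)n * sxy - sx * sy) / ((float)n * sxx - sx * sx);
}
```

A design like this scales down trivially, because the only resource it consumes is the buffer itself.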

“But this architecture is so different…”

Rather than trying to swim upstream, you decide to use the architecture as it was intended to be used and embrace services. This is nice, because if one service stops, it can be restarted without rebooting the whole system. You can also take advantage of D-Bus, as it provides a cool way to communicate with other applications. Splitting your code up into different applications takes encapsulation to a new level and means that a complete firmware rebuild can often be avoided. It also means that you can use scripting languages such as Python, Perl, and even Bash to accomplish many otherwise tricky tasks.

Once your larger system is developed, getting it to run on a much smaller microcontroller could require a partial rewrite: wrapping interfaces, simulating architectural machinery found on the bigger system, and converting scripts written in different languages into C/C++. What would have fitted on a small, low-cost microcontroller now ends up requiring a larger, more power-hungry one, costing more than it otherwise would have. Downward scalability has been eroded.

“We just don’t have the memory to run so many different threads…”

Developers often create and use new threads without restraint when using a preemptive operating system. Doing so is very nice for the programmer, since it offers a lot of encapsulation. However, it is very expensive and inefficient in terms of memory usage and processing resources, since every thread carries its own stack and context. A very large microprocessor-based embedded system usually won’t even blink when faced with this style of programming, since it has seemingly infinite resources.

However, when you try to scale down your hardware resources, this becomes a real limiting factor. I have found that a multi-threaded application can easily require an order of magnitude more RAM than a single-threaded one. Processing efficiency is also reduced, and code size tends to be higher.
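The single-threaded alternative is the classic run-to-completion super-loop: each “task” is a short function stepped from one loop, so the whole system shares a single stack instead of allocating one per thread. A minimal sketch (task names and state are hypothetical placeholders for real firmware work):

```c
#include <stddef.h>

/* One stack, many tasks: each "task" is a function that runs to
 * completion quickly, stepped from a single super-loop, instead of
 * a preemptive thread with its own dedicated stack. */
typedef void (*task_fn)(void);

static int blink_state;            /* toggled LED state (illustrative) */
static void blink_task(void)  { blink_state ^= 1; }

static int sample_count;           /* sensor readings taken (illustrative) */
static void sample_task(void) { sample_count++; }

static task_fn tasks[] = { blink_task, sample_task };

/* One pass of the super-loop: step every task once. */
static void scheduler_run_once(void)
{
    for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
        tasks[i]();
}
```

Tasks that need to wait become small state machines rather than blocked threads, which is more work for the programmer, but it is exactly what makes the design fit in a few kilobytes of RAM.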



Making code scalable is challenging. However, I have seen that scaling firmware downwards is a lot more difficult than scaling it upwards. I have covered some of the subtler pitfalls that unsuspecting development teams may face when attempting this.

I am sure it is possible to develop code that scales down when one has a lot of experience, discipline, and process as guides. However, it is never an easy task. If the scalability requirements are known upfront, I would encourage one to start with the smallest system and scale upwards. Alternatively, one could tackle the issue by simultaneously developing both the smallest and largest systems.

I am sure many scenarios similar to the one painted above have been encountered. It would be interesting to hear anecdotes relating to attempts to scale firmware downwards.

