New Electronics – Automation for the automaters of things

The increased complexity of the IoT environment may signal that it is time to streamline code-development practices. By Chris Edwards.

What is different about code development for Internet of Things (IoT) devices? On one level, not much. But when you consider what they fit into, the situation seems much more complex.

An individual device may perform relatively simple operations yet form part of a complex system of systems. Each device must be easily accessible and protected against hacking, not least because a compromised unit can give an attacker an easy way into the wider network. As part of that focus on security, each device also needs to be able to receive patches in the form of over-the-air (OTA) updates.

The need to maintain a guaranteed level of security, handle a variety of use cases, and design systems in which thousands of devices cooperate presents problems that are not necessarily due to bugs in the embedded code, but to assumptions about behavior that do not scale well, or that even lead to deadlocks and devices turning into electronic bricks.

One way to approach such systems is to move to agile development where the system starts with a minimal set of features, but is designed in such a way that more and more capabilities can be added over time. “The fundamental premise of Agile is to put out something that can be shipped,” says medical device software consultant Jeff Gable, who co-hosts the Agile Embedded podcast with embedded development consultant Luca Ingianni.

To streamline its agile processes, the cloud-computing community came up with the concept of “devops”: the automation of almost the entire build process. It is an approach that led to the oft-cited example of Amazon’s deployment system: platform analytics director Jon Jenkins claimed a decade ago that developers were rolling out updates somewhere in the giant’s web services and retail systems every 11.6 seconds on weekdays.

In a typical devops pipeline, once code is checked in it is queued for batches of unit tests and static analysis. Some jobs run immediately, while others wait for nightly downtime, all under the control of automation tools such as the open-source Jenkins environment. If the tests pass, the result can be an automated nightly build that is ready for system-level testing or, if it is a release candidate, issued for deployment.
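As a rough illustration, a post-commit stage often boils down to a script the automation server can call unattended. The sketch below assumes a host-built unit-test binary, illustrative make targets and the open-source cppcheck analyser; none of these names come from the project itself.

```python
"""Minimal sketch of a post-commit pipeline stage. The make targets, binary
paths and use of the cppcheck analyser are illustrative assumptions."""
import subprocess
import sys

STEPS = [
    # Build the firmware image and a host-side unit-test binary (hypothetical targets).
    ["make", "firmware", "host_tests"],
    # Run the unit tests that were just compiled for the host.
    ["./build/host_tests"],
    # Quick static analysis; a non-zero exit code marks the commit as broken.
    ["cppcheck", "--enable=warning", "--error-exitcode=1", "src/"],
]

for cmd in STEPS:
    print("running:", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(1)  # any failing step fails the stage, so the CI server flags the commit
print("stage passed")
```

A server such as Jenkins would run a script like this on every commit, deferring the longer system-level jobs to the nightly build.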

A need for discipline

Devops for agile requires discipline, Gable says, with every step available through scriptable command-line tools. If a step requires someone to open the GUI of an IDE, it will not work in a devops pipeline.

That discipline can be difficult to maintain, especially if static-analysis tools throw up a series of warnings for a problem that isn’t easy to fix right away. But ignoring warnings, or tolerating mysteriously failing build steps, tends to store up problems that will hurt schedules when deadlines become more pressing. Proponents claim that, as long as teams adhere to the discipline, they will see tangible benefits. Many bugs, assuming the tests hit a reasonable set of coverage points, will be fixed quickly, and automated regression tests ensure they do not reappear without warning.
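The payoff of that discipline is easiest to see in a regression test that pins a bug down once it has been fixed. The sketch below is purely illustrative: the pure-Python parser stands in for an embedded C routine that would normally be compiled for the host and wrapped for testing.

```python
"""Minimal sketch of a regression test locking in a fixed bug. The pure-Python
parser below is a stand-in for an embedded C routine that would normally be
compiled for the host and wrapped for testing."""
import pytest

def parse_length(header: bytes) -> int:
    """Stand-in parser: little-endian 16-bit payload length from a packet header."""
    if len(header) < 2:
        # The original defect: truncated headers used to crash the device.
        raise ValueError("truncated header")
    return header[0] | (header[1] << 8)

def test_normal_packet():
    assert parse_length(b"\x10\x00") == 16

def test_truncated_header_rejected():
    # Once this test is in the pipeline, the old crash cannot quietly return:
    # any regression turns the build red.
    with pytest.raises(ValueError):
        parse_length(b"\x10")
```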

“We definitely see customers in all segments of the embedded market moving toward a more developer-oriented way of working,” says Anders Holmberg, CTO of tools provider IAR Systems. “Perhaps it’s not so much the ability to offer OTA updates per se, but a way to optimize ways of working and use tool investments in the best possible way in an organization. A side effect of this is, of course, the ability to respond faster to security issues and roll out updates to users in a controlled manner.”

Since its introduction, the scope of devops has begun to expand to encompass security under the even more ungainly contraction “devsecops”. This isn’t just PowerPoint engineering, where everyone agrees to give some thought to security when writing code.

In 2019, the US Department of Defense published its own recommendations for devsecops on critical systems, which Wind River, among others, uses to inform its own work on devsecops pipelines for embedded systems and the IoT.

The core idea of devsecops is to use the same test-driven pipeline to make security awareness a core part of application development, instead of leaving code analysis to the end, when it may be too late to make breaking changes. Static analysis can look for common problems, such as vulnerability to buffer overflows.
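In practice, that can mean a gate in the pipeline that breaks the build when the analyser reports buffer-related findings. The sketch below uses the open-source cppcheck tool; the rule identifiers and output template shown are illustrative and would need tuning for a real project.

```python
"""Sketch of a security gate: break the build when static analysis reports
buffer-related findings. The cppcheck rule ids and output template shown here
are illustrative and would need tuning for a real project."""
import subprocess
import sys

BLOCKING_IDS = {"bufferAccessOutOfBounds", "arrayIndexOutOfBounds", "strncatUsage"}

# cppcheck writes its findings to stderr; the template makes them easy to parse.
result = subprocess.run(
    ["cppcheck", "--enable=warning", "--template={id}:{file}:{line}:{message}", "src/"],
    capture_output=True, text=True,
)

violations = [line for line in result.stderr.splitlines()
              if line.split(":", 1)[0] in BLOCKING_IDS]

for finding in violations:
    print("blocking finding:", finding)
sys.exit(1 if violations else 0)
```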

Similarly, the pipeline can exercise runtime verification of binaries, automatically signing code modules and possibly even encrypting them on every build, even when the project is at an early stage. Addressing this early on helps shake out the problems of deploying to secure hardware. It also supports the ability to audit builds for evidence of possible compromises in the source code.
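A signing step of that kind can be a few lines of scripting run on every build. The sketch below assumes an Ed25519 key and the Python cryptography library; the file names, and keeping the key on disk at all, are simplifications for illustration.

```python
"""Sketch of an automated signing step run on every build, assuming an Ed25519
signing key held by the build server. File names and key handling are purely
illustrative; a production pipeline would keep the key in an HSM or a secrets
manager rather than on disk."""
from cryptography.hazmat.primitives import serialization

# Load the build server's private signing key (unencrypted PEM here for brevity).
with open("build_signing_key.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

# Sign the firmware image produced by this build.
with open("build/firmware.bin", "rb") as f:
    image = f.read()
signature = key.sign(image)  # Ed25519 keys sign the message directly

# Store the detached signature next to the image; the bootloader or OTA agent
# verifies it against the matching public key before accepting the update.
with open("build/firmware.bin.sig", "wb") as f:
    f.write(signature)
print(f"signed {len(image)} bytes of firmware")
```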

Holmberg says the security modules provided by IAR’s subsidiary Secure Thingz, which include the C-Trust code-encryption system, can be hooked into a devops pipeline through their command-line interfaces.

One problem for developers in the embedded and IoT world is that they often lack the benefit of the software containers that make it easy to deploy binaries to a wide variety of servers and expect them to run smoothly. Hardware differences play a much bigger role and can easily dominate testing.

Simulation tests

For many of the builds that go through the pipeline, testing the code on the final target hardware can be a poor choice because of the time it takes to flash the device and the difficulty of performing the kinds of regression tests that support continuous integration.

Instead, simulations running on workstations or cloud servers can take more of the load for tests performed after commits or during nightly runs. The hardware itself ends up being reserved for situations where engineers want to work with real-time, real-world I/O in hardware-in-the-loop configurations. In the middle are so-called board farms, which are similar to the device farms mobile-app teams use to test against a variety of smartphones. Board farms use hardware that resembles the target, organized in racks that can be accessed remotely.

Although board farm users have largely had to build their own infrastructure to manage the hardware, some companies have begun to push for some level of standardization.

Timesys and Sony, for example, have proposed an application programming interface (API) for sending data to and retrieving it from board-farm devices.
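From the pipeline’s point of view, a board farm looks like a remote service that can be reserved, flashed and released. The endpoints and fields in the sketch below are hypothetical, not the Timesys/Sony proposal; they only illustrate the flow.

```python
"""Sketch of driving a board farm from the pipeline over a REST interface.
The server URL, endpoints and JSON fields are hypothetical and illustrate only
the reserve-flash-test-release flow."""
import requests

FARM = "https://boardfarm.example.com/api"   # hypothetical board-farm server

# Reserve any free board matching the target, flash the image, run a smoke test.
board = requests.post(f"{FARM}/reserve", json={"target": "cortex-m55-devkit"}).json()
board_id = board["id"]
try:
    with open("build/firmware.bin", "rb") as f:
        requests.post(f"{FARM}/boards/{board_id}/flash", data=f.read())
    result = requests.post(f"{FARM}/boards/{board_id}/run",
                           json={"test_suite": "smoke"}).json()
    print("smoke test:", result["status"])
finally:
    # Always hand the board back so nightly jobs on other branches can use it.
    requests.post(f"{FARM}/boards/{board_id}/release")
```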

For virtual targets running on servers, there has been a lot of cross-pollination from the electronic design automation (EDA) community, where simulation is the mainstay of development.

“When we started 12 or 13 years ago, everyone was doing hardware simulation to get the SoC to work,” says Simon Davidmann, president of Imperas, a company that creates software models of processor cores for simulations. “We founded Imperas to bring these EDA technologies to the world of software developers.”

Arm has similarly supported SoC designers with models of its devices and has begun to move into direct support for IoT development with its Arm Virtual Hardware program, launched at its annual developer conference last fall. Currently available for the Cortex-M55, the program may be extended to other cores in the family in the future.
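Whatever the simulator, the pipeline drives it much as it drives a board: boot a test image, collect the output, gate on the result. The sketch below uses QEMU’s Cortex-M system emulation as an open-source stand-in for commercial virtual hardware such as Imperas models or Arm Virtual Hardware; the machine type, ELF name and pass marker are assumptions.

```python
"""Sketch of running firmware tests on a simulated target, using QEMU's
Cortex-M emulation as an open-source stand-in for commercial virtual hardware.
The machine type, ELF path and the pass marker printed by the test build are
assumptions for illustration."""
import subprocess

# Boot the unit-test build on an emulated Cortex-M3 board; the test harness in
# the firmware prints its results over semihosting and then exits.
proc = subprocess.run(
    ["qemu-system-arm", "-M", "mps2-an385", "-cpu", "cortex-m3",
     "-kernel", "build/unit_tests.elf", "-nographic", "-semihosting"],
    capture_output=True, text=True, timeout=120,
)

print(proc.stdout)
# Gate the pipeline stage on a marker emitted by the firmware's test runner.
if "ALL TESTS PASSED" not in proc.stdout:
    raise SystemExit("simulated target tests failed")
```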

Devops for embedded and IoT development is still in its infancy and adoption demands significant upfront effort, although consultancies such as Dojo Five and toolchain providers such as IAR and Wind River are building out their offerings. But automating builds and analyses is another cloud technique that can stop IoT projects from being overwhelmed by complexity.
