The International Electrotechnical Commission (IEC) does not have a set of standards specifically created for the security of the Internet of Things (IoT). However, it has a series of cybersecurity standards for industrial automation and control systems.
The IEC 62443 series lays out requirements designed to support the development and deployment of secure industrial automation and control systems (IACS). These include requirements for secure development lifecycles, secure coding, and patch management.
Few organizations breeze through compliance with these standards, however, and difficulties are likely to emerge. Some organizations operate processes with inherent security weaknesses that make compliance challenging.
What is IEC 62443-4-1?
IEC 62443-4-1 is the first subsection of part 4 of the IEC 62443 international series of standards, and it focuses on the development and maintenance of secure products. It details the requirements for building secure IACS products and components and provides a specific definition of a secure development lifecycle (SDL).
IEC 62443-4-1 covers the full lifecycle of products, from design to product end-of-life. It defines or clarifies the parameters of secure design, secure implementation, security verification and validation, the management of product defects, firmware patching, and product end-of-life (EOL) or retirement from the market. It also includes guidelines on secure coding.
The requirements set in IEC 62443-4-1 are applicable to new or currently used processes in the development, maintenance, and retirement of hardware and software, including the firmware used in IoT and other small devices. Following these requirements is the responsibility of product developers and maintainers.
Challenges in compliance
Product developers and maintainers are likely to encounter compliance hurdles for two main reasons: the absence of on-device security and the black box effect. These mostly apply to IoT devices and the small components used in process automation and system controls.
Lack of on-device security
The overwhelming majority of IACS and IoT products are designed to serve specific purposes. As such, they lack the hardware found in devices built with significant memory, storage, and processing power. Their internal storage, RAM, and processors are limited, which makes it impossible to install full-fledged security systems on them.
It is not possible to install advanced security solutions such as endpoint detection and response (EDR) and extended detection and response (XDR) on IACS devices. In-code solutions like runtime application self-protection (RASP) are also impossible to implement with existing technologies.
As a result, crucial cybersecurity mechanisms become difficult or impossible to implement: the automation of security policies for all processes, the sandboxing and monitoring of suspicious processes, and the generation of alerts during important events such as the surge in resource consumption that accompanies a DoS or DDoS attack. Additionally, it is not possible to harden existing processes for user authorization and authentication.
The absence of on-device security systems is a major drawback in the development and deployment of secure industrial automation and control systems, IoT products, and other small devices. To secure them, or at least maintain a semblance of security, IACS and IoT makers and maintainers rely on firmware patching, sending out software updates for their devices whenever threats or attacks are identified.
Patching, however, is not fast enough to address new cyber attacks, especially given the rapidly evolving nature of cyber threats. Threat actors quickly find new vulnerabilities and come up with creative ways to compromise networks and devices. Before a security patch is developed and sent to devices, a compromise may already have taken place.
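Even when patching is the only available mechanism, the patch pipeline itself must be trustworthy. The sketch below shows one minimal way a constrained device might authenticate a firmware image before staging it, assuming HMAC-SHA256 with a pre-provisioned shared key; the key, function names, and firmware blob are all illustrative, not part of any specific vendor's process.

```python
import hashlib
import hmac

# Hypothetical shared key; real deployments would provision per-device
# keys in secure storage rather than hard-coding them.
DEVICE_KEY = b"example-provisioning-key"

def sign_firmware(image: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Vendor side: compute an HMAC-SHA256 tag over the firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_stage(image: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Device side: accept the update only if the tag matches.

    compare_digest runs in constant time, avoiding timing side channels
    during the comparison.
    """
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...new-firmware-blob"
tag = sign_firmware(firmware)
assert verify_and_stage(firmware, tag)             # authentic image accepted
assert not verify_and_stage(firmware + b"x", tag)  # tampered image rejected
```

An HMAC scheme keeps the device-side check cheap; vendors with the resources for public-key infrastructure would typically use asymmetric signatures instead, so that no signing secret ever resides on the device.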
Black box effect
The black box effect describes a setup wherein an artificial intelligence system or program produces useful information without divulging how it arrived at that information. It offers no explanation or elaboration of its results or conclusions. In other words, recipients receive the output but have no means to dig into the processes that produced it.
This is what happens in the current setup between devices and their makers and maintainers. The latter usually have no means to look into the performance of their products, the operational problems encountered, or software defects. They may be able to tell that an error occurred, but not how it happened or how their devices were being used when the problem or glitch arose.
If there is any information available, this is usually insufficient or siloed. The data is likely kept in some form that is not readily usable for diagnostics and debugging. As such, device makers have a hard time understanding the nature of the problems in their devices and coming up with the appropriate solutions.
Specifically, it is difficult to monitor network activity and IP addresses. There are no audit logs of security events in different endpoints and platforms. There is usually no way of detecting unauthenticated or malicious activity and attempts to execute anomalous software. There are no viable mechanisms to identify failed login attempts and have these automatically reported. Also, it is difficult to monitor all system resource use to determine cases of overloads and malfunctions. Moreover, there is not enough memory capacity to store security audit data and meet the requirements for IEC 62443 compliance.
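Some of these gaps can be narrowed even on memory-starved hardware. The sketch below, a fixed-size audit buffer with a failed-login alert threshold, illustrates the idea; the event names, capacity, and threshold are hypothetical values chosen for the example, not figures from the standard.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    kind: str    # e.g. "login_failed", "cpu_spike"
    detail: str

class AuditBuffer:
    """Bounded event store for devices with little spare memory.

    The deque evicts the oldest events once capacity is reached, so
    memory use stays constant; a counter raises an alert when failed
    logins cross the threshold.
    """
    def __init__(self, capacity: int = 64, login_threshold: int = 3):
        self.events = deque(maxlen=capacity)
        self.login_threshold = login_threshold
        self.failed_logins = 0
        self.alerts = []

    def record(self, event: Event):
        self.events.append(event)
        if event.kind == "login_failed":
            self.failed_logins += 1
            if self.failed_logins >= self.login_threshold:
                self.alerts.append(f"{self.failed_logins} failed logins")
        else:
            self.failed_logins = 0  # reset on any other activity

buf = AuditBuffer(capacity=8, login_threshold=3)
for _ in range(3):
    buf.record(Event("login_failed", "bad password for admin"))
print(buf.alerts)  # one alert, raised on the third failure
```

On a real device, the buffered events would be shipped to an off-device collector rather than analyzed locally, which is precisely the role of the observability platforms discussed below.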
Addressing the challenges
The problems discussed above are not without solutions. It is possible to comply with IEC 62443 requirements by using a security and observability platform that facilitates the establishment and streamlining of processes to achieve a secure development lifecycle.
A deterministic security solution embedded in the IACS or IoT device can provide runtime protection to address known and yet-to-be-profiled threats. This expands security visibility into connected devices that are not capable of running their own security applications. An AI-driven observability platform can gather relevant information from IACS and IoT devices to centralize anomaly detection and deliver operational intelligence in real-time.
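To make the statistical layer of such a platform concrete, here is a minimal anomaly-detection sketch: a simple z-score test over pooled telemetry. A production platform would maintain streaming, per-device baselines, and the threshold here is illustrative.

```python
import statistics

def detect_anomalies(samples, threshold=2.5):
    """Return indices of readings more than `threshold` standard
    deviations from the mean of the sample window."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # perfectly flat telemetry: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# CPU-utilization telemetry from one device; the spike at index 5
# mimics the resource surge of a DoS attack.
cpu = [12, 14, 11, 13, 12, 96, 13, 12, 11, 14]
print(detect_anomalies(cpu))  # [5]
```

Because the computation runs on the collector, not the device, it sidesteps the on-device resource limits described earlier while still surfacing the resource surges the standard expects to be monitored.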
An effective observability platform designed with deterministic security functions compensates for the lack of on-device security and the difficulty of obtaining information about device use, glitches, defects, and attacks. It empowers IACS and IoT device makers and maintenance teams to achieve IEC 62443-4-1 compliance and to offer customers products that are secure and less likely to cause cybersecurity problems.
Screenless displays that provide 3-D images viewable from all directions continue to undergo development on multiple fronts. But can they find a market?
In the opening scene of the 2003 movie Paycheck, we learn that the protagonist, Michael Jennings, has been tasked with reverse engineering a screen-based 3-D display made by his client’s competitors. The client’s executives are unimpressed—until Jennings pulls the bezel away to reveal a free-standing 3-D image that no longer needs a screen. The chiefs rejoice: “And they said 100 percent market share was impossible!”
Volumetric display market size
The global volumetric display market is segmented by display type into swept-volume and static-volume displays; by component into projectors, mirrors, lenses, memory, and screens; by technology into digital light processing (DLP) and liquid crystal on silicon (LCoS); and by end user into education, healthcare, aerospace, advertising, and others. The market is anticipated to record a CAGR of 32% over the forecast period of 2019-2027.
Unknown to the viewers, the key transformation at work in the scene—the one that made the display so much more compelling—was the transformation of a traditional 3-D image into a volumetric image. Traditional 3-D uses a screen of some sort to converge light to an optical real image point somewhere in front of the screen. In a volumetric display, the “screen” is, in a sense, scattered throughout the image volume itself: light diverges from scattering or emissive point primitives within the volume to form the image in physical space.
Thus, rather than converging from a limited aperture, light from a volumetric display may instead diverge over very large angles. In fact, by emitting light isotropically, a volumetric image point can be seen from all directions. By turning convergence into divergence at the modulation surface or surfaces of the display, a volumetric system turns traditional 3-D inside-out, to create screenless real images that place no limitations on the viewer’s position. The resulting images possess a unique physicality that allows them to occupy space, in a way very much like the physical object being depicted.
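The geometry behind this inside-out transformation can be made concrete with a short calculation. The sketch below, using assumed illustrative dimensions, computes the view-zone half-angle that a flat converging aperture imposes on a real image point; an isotropic volumetric point, by contrast, radiates into the full sphere.

```python
import math

def view_zone_half_angle(aperture_width_m: float, image_distance_m: float) -> float:
    """Half-angle, in degrees, of the viewing zone for a real image point
    formed by light converging from a flat aperture of the given width to
    a point the given distance in front of it."""
    return math.degrees(math.atan(aperture_width_m / (2 * image_distance_m)))

# A 0.5-m-wide screen converging to a point 0.25 m in front of it:
print(view_zone_half_angle(0.5, 0.25))  # 45.0 degrees
# An isotropic volumetric point has no such aperture: it emits into the
# full 4*pi steradians and can be viewed from any direction.
```

Moving the image point farther from the screen only shrinks this angle, which is why screen-based real images constrain the viewer in a way a co-located emitter does not.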
This article offers a look at the main types of volumetric displays, and some recent advances in this unusual 3-D visualization technology. It also explores some efforts at commercialization—and what advances might be necessary to bring these displays into the mainstream.
3-D display families
To understand volumetric displays, we need to place them in the context of the three families of 3-D displays: ray displays, wave displays and point displays. Both ray and wave displays use a screen as a modulating surface. Ray displays, which include lenticular, barrier-line and some coded-aperture systems, form real points made by intersecting rays in space; wave displays, which include holographic displays and nanophotonic phased arrays, form similar points by focusing a wave front. (Some would argue that these families lie at different places on the same spectrum. We would add that you can determine which side of this spectrum you are on by simply asking, “does diffraction work for me or against me as display elements get small?”)
Separate from these first two families is the third group, point displays, which do not converge light from a surface but instead diverge light from a point. This display family has only one member, the volumetric display. Indeed, the definitions of the point display and the volumetric display are essentially synonymous: the display’s scatterers or emitters are co-located with the actual image points.
The primary result of this co-location is that, in the ideal case, the image may be seen from almost any direction. There is no display aperture (screen), and there may be little or no viewing zone restriction. Co-location of the display emitters with the image points also means that the human eye accommodates readily to the volumetric 3-D image.
Once a scattering surface is dislocated from the image point it forms (such as when light scatters from a remote screen), however, an aperture is immediately formed that places restrictions on the viewer, and the accommodation cues are now no longer perfect, as they are subject to the diffraction limit of the new aperture. Thus, once the co-location condition is violated, the principal benefits of volumetric displays—perfect accommodation, no view zone restriction—start to diminish. Indeed, Curtis Broadbent, a prominent volumetric-display designer, suggests that once co-location is violated, it’s a clue that we are no longer looking at a volumetric display. “The imposition of limitations on the viewer,” Broadbent says, “violates the spirit of volumetric displays.”
Advantages and disadvantages
The point, wave and ray display taxonomy allows the display designer to identify what type of display she is looking at, and what design challenges are likely to beset a given architecture. The co-location of perceived points with their true sources in volumetric displays in particular creates a powerful and practical discriminant, allowing one to group displays that have similar affordances (that is, similar baseline properties that determine how the viewer can interact with the display) and to evaluate borderline cases. Four affordances in particular highlight the advantages and disadvantages of volumetric displays relative to ray and wave displays.
Accommodation. Human eyes accommodate to volumetric image points just as they do to actual material objects, because volumetric image points are material objects—at least for a brief moment. However, ray and wave displays form optical real image points by the convergence of light. The quality of that point, or point spread function, depends strongly on the size and quality of the aperture that supports it. Is it coherent? Does it present a large numerical aperture? To match the accommodation of a volumetric point, a ray or wave display would have to completely surround the point, converging from all directions to form the image. Only then could the display aperture be prevented from degrading the accommodative effect.
View angle. The advantage of volumetric displays also shows in their large view angle, which generally comes "for free." Wide view angle in ray displays, and especially in holographic wave displays, comes instead at the price of tremendous hardware and computational complexity.
Occlusion. On the other side of the ledger, occlusion—the ability of one object in a 3-D scene to partly obscure another—presents a considerable challenge for point/volumetric images. In general, the image point primitive wants to emit isotropically, but to create images with self-occlusion, it must be possible to turn off the point’s emission in some directions. In ray and wave displays, achieving occlusion is a much simpler matter that generally boils down to careful content creation.
Virtual-image formation. A virtual image can be thought of as a window into another world, which may have no mapping on reality, and it likewise presents challenges for volumetric displays. If a display is hanging on a solid brick wall, but the 3-D image shows an open landscape in the background, it may be necessary to create wave fronts or rays that back-cast to points that cannot exist in real space. Given the requirement that volumetric displays have physical scatterers or emitters co-located with image points, virtual images would seem to be fundamentally impossible for volumetric displays.
An array of tiny emitters that acts like a phased array, or even like Huygens sources, might be made to create virtual image points. But such a display would create an aperture (the array boundaries) that would limit the viewable angles of the virtual image point. It would cease to be a volumetric display and instead become a phased-array wave display formed with volumetric hardware. It would thus inherit the affordances, and challenges, of the wave display family at the expense of the advantages of the volumetric-display family.
(Sometimes that tradeoff is desirable. For example, in the late 2000s, Oliver Cossairt and colleagues converted a volumetric display into a multiview ray display, trading away co-location to obtain occlusion cues.)
Volumetric displays encompass three distinct approaches. Swept-volume displays commonly use rotating emissive or reflective screens, including illuminated spinning paddles, spinning LEDs or translating projection surfaces. As an example, the Peritron display uses a phosphor-coated paddle that spins inside a glass chamber under vacuum. An electron beam hitting the paddle creates a point emitting visible light. Steering the electron beam and spinning the paddle creates a volumetric image from the emissive points.
Static-volume displays might form images by upconversion in nonlinear gases or solids or by projecting onto a number of diffusing planes. The Rochester Illumyn, for instance, is a glass chamber filled with heated cesium vapor. A 3-D position within that gas is illuminated with two beams at wavelengths (such as infrared) not visible to the human eye. The two wavelengths combine in the nonlinear material to form visible light that scatters from that position to form an emissive image point; scanning the two beams creates a volumetric image.
A third, relatively young category, free-space displays, operates in air, with no barrier between user and image; it includes free-particle, trapped-particle, and plasma emission displays. The first free-space display known to the authors is Ken Perlin's "Holodust" concept, in which ubiquitous dust motes are identified and then immediately illuminated with a laser to build an image in space. Later, Keio University demonstrated a display in which a powerful, pulsed IR laser is focused in air to create a plasma. Scanning the focus through the air draws an image composed of plasma dots. This process has since been refined to use femtosecond pulses and a spatial light modulator to focus on multiple points simultaneously. Several displays use heat or fog to modify air so that it can scatter or modulate light.
This year saw the introduction, at Brigham Young University, USA, of another free-space display, the optical-trap display (OTD). An OTD operates by first confining a light-scattering particle in a nearly invisible optical trap. The trap is moved through space, dragging the trapped particle with it. The trapped particle is then illuminated with visible lasers to draw a 3-D image by persistence of vision. The prototype scans particles at roughly 1 to 2 m/s to form very small (1 cm3) video-rate images. These small images can be full-color and possess image definition up to 1600 dpi. Researchers hope to greatly increase the size of images in future prototypes by using multiple particles simultaneously.
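A back-of-envelope scan budget shows why single-particle OTD images stay small. Using the reported 1-2 m/s scan speed and 1600-dpi definition, and assuming a 24 Hz persistence-of-vision refresh rate (the frame rate is our assumption, not a figure from the prototype), the particle can trace only a few centimeters of path per frame:

```python
speed_m_per_s = 2.0    # upper end of the reported 1-2 m/s scan speed
frame_rate_hz = 24.0   # assumed persistence-of-vision refresh rate
dots_per_inch = 1600   # image definition reported for the prototype

path_per_frame_m = speed_m_per_s / frame_rate_hz
voxel_pitch_m = 0.0254 / dots_per_inch          # ~16 micrometers per dot
voxels_per_frame = path_per_frame_m / voxel_pitch_m

print(f"{path_per_frame_m * 100:.1f} cm of path per frame")  # 8.3 cm
print(f"~{voxels_per_frame:.0f} voxels per frame")           # ~5249 voxels
```

A few thousand addressable voxels per frame is enough for sparse, centimeter-scale imagery but far short of a dense volume, which is why scaling to multiple simultaneously trapped particles is the researchers' stated path to larger images.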
In addition to the examples above, the volumetric-display scene includes several borderline cases, which often use volumetric hardware to produce a ray display (or vice versa). For example, the Texas DMD display, commonly called a holographic display, is perhaps better classified as a volumetric display. That's because the holographic wave fronts from the DMD focus inside a diffusing liquid, which provides a scattering medium that enlarges the view zone of the display and, in so doing, trades away the ability to occlude points. Holographic hardware thus creates a volumetric display, and thereby adopts the advantages and disadvantages of its new display family.
Another borderline example, the Sony Raymodeler, uses a spinning array of LEDs and thus appears superficially similar to swept-volume displays. However, these LEDs are not used as point primitives; instead the array projects a large number of views as a ray display. As such, the display can easily achieve occlusion and can create virtual images, but lacks the perfect accommodation of a volumetric display.
Efforts at commercialization
Despite the bullish forecast of the executives in Paycheck, volumetric displays haven’t exactly captured 100 percent of the 3-D display market. There have, however, been a number of commercial efforts. Two case studies hold particular interest: Actuality Systems’ Perspecta Display, a 10-cm-diameter swept-volume display, and the LightSpace DepthCube, a stacked-LCD static-volume display. Despite the displays’ physical differences, the teams behind them came to similar conclusions at the end of years-long commercialization efforts.
Gregg Favalora, the Harvard-educated founder and CTO at Actuality Systems, made his first attempt at a volumetric display in 1988 as a ninth-grader. He would later make volumetric images because he felt that a “floating 3-D image would be visually impressive, and in 1997-2000 seemed so feasible” owing to emerging technologies. Favalora noted the availability of Texas Instruments micromirrors and computational resources to do rendering. He had also identified a way (an aspect of which had been proposed in the 1950s) to project a sharp image onto a spinning disk.
Encouraged by money won in an MIT entrepreneurship competition to build a company, Favalora founded and raised seed funding for Actuality Systems at the turn of the 21st century. Its flagship display, the Perspecta, was capable of images with remarkably high resolution. Perspecta could generate a 100-million-voxel image with off-the-shelf—albeit expensive—parts. The display was marketed to a wide variety of potential customers as a tool for user-interface research, structure-based pharmaceutical design and petroleum exploration, and was assessed in medical visualization. The technology’s high price point, however, constrained the customer base, and Actuality Systems’ assets, such as its valuable patent portfolio, were acquired by Optics for Hire in 2009.
During the same period, on the other side of the country, Alan Sullivan was building a 100-TW laser at Lawrence Livermore National Laboratory. Looking for a new opportunity, he came upon a startup that included, in his words, an "empty room, a sketch on a napkin and more or less a blank check" to develop 3-D displays. Sullivan jumped on board. The following years brought reorganizations and promotions, and by 2003 Sullivan, now CTO, had shepherded the start-up's static-volume prototype to a pre-commercial state. All they needed was a market.
Unfortunately, the search for a market outlived two companies, the second of which, LightSpace Technologies, Sullivan founded himself. Despite the display’s high price—more than US$10,000—there were a number of interested parties. But all made demands that the display could not meet. There was interest from the medical field, but the display needed to be entirely free of artifacts. Slot machine manufacturers loved it, but they needed it to be extremely inexpensive (say, US$50 per unit). The oil industry was keen, but it needed a much larger display for large-group collaborative decision-making. After years of searching, Sullivan thought he might have found a niche market in interventional radiology, but it was deemed too small a market by his investors.
By 2007, Sullivan had reached a state he describes as “total exhaustion” and left LightSpace. Before leaving, he submitted a 200-page document full of suggestions for improvements to the display. The recommendations reportedly all turned out to be good ideas, and recently the LightSpace DepthCube has resurfaced with an improved display.
The common lesson of the Actuality and LightSpace commercialization efforts seems to be that, despite excellent technology, success requires a dramatically reduced price point, greater size, or still greater image quality. It will be interesting to see whether the new LightSpace display and the new swept-volume Voxon Photonics VX1 can lower cost and increase affordances enough to gain a market foothold. Also of interest will be the rise of the Rochester rubidium-cesium excited-gas display, which might achieve display diameters of more than a meter, according to one of its inventors, Curtis Broadbent. Free-space displays have also made forays into the commercial sphere, including Burton Aerial in 2011 and a Kickstarter effort launched in 2016 by Jaime Ruiz-Avila.
Killer app wanted for volumetric displays
These early commercialization experiences, and an assessment of the features of current and future volumetric displays, prompt the question: What is the “killer app” for volumetric displays? Does there exist an application that only a volumetric display can adequately accomplish? Or could every potential application be done, and done more cheaply, with another display—such as, for example, a head-mounted display?
Notwithstanding the current efforts of AR/VR juggernauts, we believe that the answer to this question is no in at least some cases: When one wants to look someone else in the eye who is remotely located; when you can’t reasonably put glasses on your intended viewer (such as an enemy combatant, or everyone who might pass by your storefront); or when one set of headwear might conflict with another headset used in medical or military applications.
In these scenarios, the materiality of volumetric displays—their presence in space, and the freedom from restrictions on the viewer’s location—makes them an ideal choice. The case for these displays is also strengthened if the imagery is sparse, viewed at interactive distances, or created in concert with other technologies, like holography, with complementary affordances.
The 3-D displays most typically imagined in our popular depictions of the future, in books and in films such as Star Wars and Paycheck, tend to most resemble free-space volumetric displays—in particular, OTDs. These displays have the potential to provide both excellent color and fine detail. However, it is too early to say if this technology will provide a feasible platform for 3-D display, as OTDs still have some significant technical challenges to surmount.
First, the OTD demonstrations thus far have involved trapping, illuminating and scanning a single particle, and it remains to be shown that several particles can be trapped and illuminated simultaneously in a reliable and robust fashion. If this can be accomplished, however, it’s interesting to envision the new possibilities that a colorful, detailed free-space platform might provide. For example, one might be able to obtain large autostereoscopic 3-D images from small devices—mobile-technology analogs of Princess Leia’s image in Star Wars.
If OTDs could be made to scatter selectively in preferred directions (an even greater challenge than multiple-particle manipulation), it might even be possible to see the first free-space images with self-occlusion. The same directional control could also be used to create an effect that hasn’t previously been much discussed, even in science fiction—that of a viewer-dependent physical image. That is, one could project a volumetric image that would be customized for each individual viewer. If angular control is achieved, then viewer-customized imagery should not be too far behind.
In an era of renewed interest and new possibilities for volumetric displays, it is more important than ever to understand and appreciate their unique place among 3-D technologies—and the technological and commercial breakthroughs that could come in the near future.
How do volumetric displays compare to virtual reality and augmented reality?
The biggest difference is that no headset is needed. Volumetric displays also enable a unique shared social experience, in which people gather around the image and interact with genuine face-to-face communication.
References and Resources
- B. Blundell and A. Schwartz. Volumetric Three-Dimensional Display Systems (Wiley-IEEE Press, 1999).
- H. Maeda et al. “All-around display for video avatar in real world,” Proc. 2nd IEEE/ACM Intl. Symp. Mixed Augm. Real. (IEEE Computer Society, 2003).
- O.S. Cossairt et al. “Occlusion-capable multiview volumetric three-dimensional display,” Appl. Opt. 46, 1244 (2007).
- T. Yendo et al. “The Seelinder: Cylindrical 3-D display viewable from 360 degrees,” J. Vis. Commun. Img. Rep. 21, 586 (2010).
- D.E. Smalley et al. “A photophoretic-trap volumetric display,” Nature 553, 486 (2018).