Immerse yourself in the world of AR and discover its different types.
When thinking of augmented reality, most people probably associate it with games, moving photographs, or virtual 3D objects placed in the real world. But the truth is that AR is not only about entertainment. In fact, it proves useful and beneficial to almost any business nowadays. In this article, we're going to give an overview of the different types of augmented reality systems.
The first of the approaches we're going to present here is marker-based AR, which makes heavy use of image recognition: it uses information about a given physical object to create digital visualizations.
Marker-based AR ties AR content to a particular visual marker found in the real world. The application captures footage with the device camera, analyzes the QR code, image, or pattern in the frame, and displays information about it on the screen. Once the object is recognized, it is overlaid with its 3D version, so the user can inspect it in detail from various viewing angles on the output hardware. As a result, the virtual experience is extremely precise and tailored to a specific orientation and location.
Marker-based apps are most often chosen for their precision, for example when human analysis of the situation would be too time-consuming or when the information gathered from a specific setting is more complex than general knowledge. For instance, an app that scans a control panel with hundreds of buttons can analyze the settings and identify problems much faster than a human.
Software for AR tracking and recognition is quite challenging to build, so developers try to reduce development time by using simple markers. QR codes are very common, as are basic designs and shapes (such as large letters, symbols, and geometric figures). Developers add these markers to the app's code and build scripts so that users' devices can identify them under various lighting conditions.
Typically, the software is constructed in such a way that when it recognizes the likely marker features (unique details), it captures the frame and processes it to establish whether the image actually matches the marker. If everything fits, augmentation takes place and the virtual object is visualized on top of the actual footage.
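The matching step described above can be sketched in a few lines. The example below is a simplified, hypothetical illustration (the dictionary, grid size, and function names are ours, not from any real AR library): a captured binary marker grid is compared against a dictionary of known markers, allowing for the marker appearing in any of four rotations.

```python
# Minimal sketch of marker matching: compare a captured binary grid
# against a dictionary of known markers, allowing any of 4 rotations.
# The marker pattern and IDs here are made up for illustration.

def rotate(grid):
    """Rotate a square binary grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

# Hypothetical marker dictionary: marker ID -> 4x4 binary pattern
MARKER_DICT = {
    7: [[1, 0, 0, 1],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [1, 0, 0, 1]],
}

def match_marker(candidate):
    """Return the marker ID if the candidate grid matches a known
    marker in any rotation, otherwise None (no augmentation)."""
    for marker_id, pattern in MARKER_DICT.items():
        grid = candidate
        for _ in range(4):
            if grid == pattern:
                return marker_id
            grid = rotate(grid)
    return None
```

In a real system the candidate grid would come from thresholding and perspective-correcting the camera frame; only if a match is found does the app draw the virtual object on top of the footage.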
Another approach is markerless AR, which enables any part of the physical environment to be used as the basis for placing 3D content. This comes in handy especially when we want to see how a given object looks in different settings and environments. Markerless AR uses sensors such as accelerometers, digital compasses, and GPS to read data from any physical location while predicting the area the user is moving through. The obtained data can be displayed on any output device, adding information about a specific item that can be seen with the capturing equipment.
Markerless AR apps don't rely on any particular image for their digital content to be displayed, so they can be used in almost any location and setting. But this doesn't make them simplistic, as even markerless augmented reality apps can have complex and very useful algorithms built in that control how content is displayed.
This method is extremely useful when we expect people to use a given app in different situations and areas, as the app can be tailored to adapt to the users’ location based on the measurements and information it collects.
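Combining the measurements mentioned above is a sensor-fusion problem. As a toy illustration (a simplified sketch, not any vendor's actual fusion algorithm), a so-called complementary filter blends a gyroscope's rate reading with an accelerometer's tilt estimate: the gyroscope is accurate over short intervals but drifts, while the accelerometer is noisy but drift-free.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into one tilt estimate.

    angle       -- previous tilt estimate, degrees
    gyro_rate   -- gyroscope angular rate, degrees per second
    accel_angle -- tilt computed from the accelerometer, degrees
    dt          -- time step, seconds
    alpha       -- trust in the integrated gyro vs. the accelerometer
    """
    # Integrate the gyro for short-term accuracy, then pull the
    # result toward the accelerometer angle to correct slow drift.
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Called once per sensor sample, this keeps the orientation estimate stable enough to anchor virtual content while the user moves the device around.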
Markerless AR can be divided into location-based AR, projection-based AR, superimposition AR, and outlining AR.
Essentially, location-based AR anchors augmented reality content to a particular location. For example, a navigation guide tied to a specific street is a useful application of this.
Location-based AR provides customized digital content based on the user's location, but, unlike a marker-based AR system, it doesn't require any special markers or physical targets to perform AR rendering and to identify where to place a virtual object in the user's environment. Instead, this type of AR uses GPS data along with a digital compass to determine the location and position of the user's device. Often, the user's camera isn't even used for augmented reality navigation; instead, the app gathers data from the GPS module, compass, gyroscope, and accelerometer to establish what content should be displayed.
Location-based AR proves very useful in applications related to sport, tourism, and travel. These apps mostly use city and road map data but can also be helpful in areas such as mountains, deserts, and forests, among others. Even in such secluded sites, with no previous camera footage available, data about the terrain, elevation, and wildlife can be used to provide users with an augmented reality map of their surroundings.
Many developers favor location-based AR because there's no need for complex image recognition functionality or specific markers to be programmed in the app. Instead, the app simply collects a set of data from the device, such as where the user is and at what angle and height the phone is held, and combines it with open-source data for an immersive user experience.
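As a rough illustration of that data-driven approach (a hypothetical sketch; the function names are ours), the code below computes the compass bearing from the user's GPS position to a point of interest and checks whether it falls inside the camera's horizontal field of view, which is essentially how a location-based app decides whether to draw a label on screen.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees clockwise from north)
    from point 1 to point 2, using the forward azimuth formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def in_camera_view(user_lat, user_lon, heading_deg,
                   poi_lat, poi_lon, fov_deg=60):
    """True if the point of interest lies within the horizontal
    field of view centered on the device's compass heading."""
    b = bearing_deg(user_lat, user_lon, poi_lat, poi_lon)
    diff = (b - heading_deg + 180) % 360 - 180  # signed angle difference
    return abs(diff) <= fov_deg / 2
```

A real app would also use the distance to the point of interest and the gyroscope's pitch reading to place the label at the right spot on screen.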
Projection-based AR is an exceptional form of immersive technology that requires no smartphone or gadget. In fact, it doesn't call for any kind of screen. Instead, one or more projection devices create a 3D model using light. Most often, the 3D model is static, but it can be animated as well. This approach is basically what we think of when we visualize a hologram.
Since few consumers have the equipment needed to create such light models, projection-based AR is mostly used by businesses – for AR training, presentations, and development purposes.
Projected 3D models are not created through apps, so to present them you only need design or spatial data and special projectors. The data for the 3D model can be gathered by scanning or photographing a physical object or through custom design. Apart from that, you will spend a lot of time calibrating the projector settings to ensure that the size, color, depth, edges, and other visual aspects of the model are accurate.
Superimposition AR recognizes an item in the physical environment and partially or completely replaces the original view of the object with an updated, augmented view of it for the human eye. This method provides many views of a target object with the option to show additional relevant information about it.
In addition, superimposition AR uses object recognition to replace the whole object or a part of it with an augmented view. Take first-person shooter games, for example, where a character can upgrade their military equipment to show a night view, an infrared view, and many others. Superimposition AR can also be used in medicine to superimpose an X-ray view of a patient's broken bone on the actual image, providing a better understanding of the bone damage. It is also used in something as simple as photo filters on social platforms.
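The core compositing step is simple to sketch. In this minimal, hypothetical example (frames are plain lists of pixel rows, not a real graphics API), a recognized region of the camera frame is replaced with the augmented patch, such as an X-ray view of that area.

```python
def superimpose(frame, overlay, top, left):
    """Replace the region of `frame` starting at (top, left) with
    `overlay`, e.g. swapping a recognized object for its augmented
    view. Frames are lists of rows of pixel values; the original
    frame is left untouched and a new one is returned."""
    out = [row[:] for row in frame]  # copy so the input stays intact
    for r, overlay_row in enumerate(overlay):
        for c, pixel in enumerate(overlay_row):
            out[top + r][left + c] = pixel
    return out
```

In a real app the (top, left) placement would come from the object recognizer, and the overlay would be rendered to match the object's scale and perspective.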
Outlining AR relies on special cameras to outline specific objects, such as lines and boundaries that human eyes can't easily recognize, which helps in certain situations. Outlining AR uses object recognition to better understand the current environment. It's used, in particular, in car navigation systems for safe driving at dusk: the system can recognize the road boundaries and outline them for the driver. This approach can also find use in engineering and architecture to outline buildings along with their supporting pillars.
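A toy illustration of the outlining idea (a deliberately simplified sketch, not how production lane-detection systems work): mark the pixels where brightness changes sharply between neighbors, which traces boundaries such as the edge of a road against its shoulder.

```python
def outline(image, threshold=50):
    """Mark edge pixels where horizontal or vertical brightness
    differences exceed a threshold. `image` is a list of rows of
    grayscale values; returns a same-sized grid of 0/1 edge flags."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # Forward differences to the right and downward neighbors.
            gx = image[r][c + 1] - image[r][c] if c + 1 < w else 0
            gy = image[r + 1][c] - image[r][c] if r + 1 < h else 0
            if abs(gx) > threshold or abs(gy) > threshold:
                edges[r][c] = 1
    return edges
```

The flagged pixels would then be drawn as a highlighted line on top of the live camera view.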
Now you have some grasp on the different types of AR systems and how each of them works. So how about using one of them with your Leo Rover? Check out our integration on how to track AR Tags with Alvar.
See which software platforms we've listed as the coolest and most promising ones for robotics.
Explore exciting applications of language models in robotics.