Spatial computing: Difference between revisions

From Wikipedia, the free encyclopedia
Content deleted Content added
No edit summary
Tag: Reverted
Line 1: Line 1:
{{Short description|Extended reality and related technologies}}
{{Short description|Extended reality and related technologies}}
[[File:Spatial Tech.webp|alt=Spatial Computing Experience|thumb|Spatial Computing Experience]]
[[File:Spatial Tech.webp|alt=Spatial Computing Experience|thumb|Spatial Computing Experience]]
'''Spatial computing''' is any of various [[human–computer interaction]] techniques that are perceived by users as taking place in the real world, in and around their natural bodies and physical environments, instead of constrained to and perceptually behind computer screens. This concept inverts the long-standing practice of teaching people to interact with computers in [[digital environments]], and instead teaches computers to better understand and interact with people more naturally in the human world. This concept overlaps with others including [[extended reality]], [[augmented reality]], [[mixed reality]], [[natural user interface]], [[contextual computing]], [[affective computing]], and [[ubiquitous computing]]. The usage for labeling and discussing these adjacent technologies is imprecise.<ref name="Ovide-2024-02-02">{{cite news |date=2024-02-02 |last=Ovide |first=Shira |title=Apple's Vision Pro is 'spatial computing.' Nobody knows what it means |newspaper=Washington Post |url=https://www.washingtonpost.com/technology/2024/02/02/apple-vision-pro-what-is-spatial-computing-ar-vr/ |accessdate=2024-02-02 |quote=Apple insists that its $3,500 Vision Pro ski goggles, which officially debuted Friday, is not virtual reality but “'''spatial computing'''.” One problem: No one agrees on the definition of '''spatial computing'''.}}</ref>
'''Spatial computing''' is any of various [[human–computer interaction]] techniques that are perceived by users as taking place in the real world, in and around their natural bodies and physical environments, instead of constrained to and perceptually behind computer screens. This concept inverts the long-standing practice of teaching people to interact with computers in [[digital environments]], and instead teaches computers to better understand and interact with people more naturally in the human world. This concept overlaps with others including [[extended reality]], [[augmented reality]], [[mixed reality]], [[natural user interface]], [[contextual computing]], [[affective computing]], and [[ubiquitous computing]]. The usage for labeling and discussing these adjacent technologies is imprecise.<ref name="Ovide-2024-02-02">{{cite news |date=2024-02-02 |last=Ovide |first=Shira |title=Apple's Vision Pro is 'spatial computing.' Nobody knows what it means |newspaper=Washington Post |url=https://www.washingtonpost.com/technology/2024/02/02/apple-vision-pro-what-is-spatial-computing-ar-vr/ |accessdate=2024-02-02}}</ref>


Spatial computers typically include sensors—such as [[RGB color model|RGB]] cameras, [[Depth camera|depth cameras]], [[3D trackers]], [[inertial measurement unit]]s, or other tools—to sense and track nearby human bodies (including hands, arms, eyes, legs, mouths) during ordinary interactions with people and computers in a 3D space. They further use [[computer vision]] ([[artificial intelligence|AI]] / [[machine learning|ML]]) to attempt to understand real world scenes, such as rooms, streets or stores, to read labels, to recognize objects, create 3D maps, and more. Quite often they also use [[extended reality|XR]] and [[mixed reality|MR]] to superimpose virtual 3D graphics and virtual 3D audio onto the human visual and auditory system as a way of providing information more naturally and contextually than traditional 2D screens.
Spatial computers typically include sensors—such as [[RGB color model|RGB]] cameras, [[Depth camera|depth cameras]], [[3D trackers]], [[inertial measurement unit]]s, or other tools—to sense and track nearby human bodies (including hands, arms, eyes, legs, mouths) during ordinary interactions with people and computers in a 3D space. They further use [[computer vision]] ([[artificial intelligence|AI]] / [[machine learning|ML]]) to attempt to understand real world scenes, such as rooms, streets or stores, to read labels, to recognize objects, create 3D maps, and more. Quite often they also use [[extended reality|XR]] and [[mixed reality|MR]] to superimpose virtual 3D graphics and virtual 3D audio onto the human visual and auditory system as a way of providing information more naturally and contextually than traditional 2D screens.

Revision as of 03:46, 16 March 2024

Spatial Computing Experience

Spatial computing is any of various human–computer interaction techniques that are perceived by users as taking place in the real world, in and around their natural bodies and physical environments, instead of constrained to and perceptually behind computer screens. This concept inverts the long-standing practice of teaching people to interact with computers in digital environments, and instead teaches computers to better understand and interact with people more naturally in the human world. This concept overlaps with others including extended reality, augmented reality, mixed reality, natural user interface, contextual computing, affective computing, and ubiquitous computing. Usage of these labels for adjacent technologies is imprecise.[1]

Spatial computers typically include sensors—such as RGB cameras, depth cameras, 3D trackers, inertial measurement units, or other tools—to sense and track nearby human bodies (including hands, arms, eyes, legs, and mouths) during ordinary interactions with people and computers in a 3D space. They further use computer vision (AI / ML) to attempt to understand real-world scenes, such as rooms, streets, or stores: to read labels, recognize objects, create 3D maps, and more. Quite often they also use XR and MR to superimpose virtual 3D graphics and virtual 3D audio onto the human visual and auditory system, as a way of providing information more naturally and contextually than traditional 2D screens.
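As a minimal illustration of the overlay step described above (a generic sketch, not any vendor's API): superimposing a virtual 3D label onto a scene ultimately reduces to projecting a tracked camera-space point onto the 2D display, commonly modeled with a pinhole camera. The focal length and image center below are assumed example values.

```python
def project_point(point_3d, focal_px, center_px):
    """Project a camera-space 3D point (x, y, z in metres, z > 0)
    onto the image plane of an idealized pinhole camera."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    # Similar triangles: image offset scales with focal length over depth.
    u = center_px[0] + focal_px * x / z
    v = center_px[1] + focal_px * y / z
    return (u, v)

# A virtual label anchored 2 m in front of the camera, 0.5 m to the right,
# on an assumed 1280x720 display with an 800 px focal length:
print(project_point((0.5, 0.0, 2.0), focal_px=800, center_px=(640, 360)))
# → (840.0, 360.0)
```

Real systems add lens distortion, head-pose transforms, and per-eye stereo projection on top of this basic model.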

Spatial computing does not technically require any visual output. For example, an advanced pair of headphones using an inertial measurement unit and other contextual cues could qualify as spatial computing, if the device made contextual audio information available spatially, as if the sounds consistently existed in the space around the wearer. Smaller Internet of Things devices, like a robot floor cleaner, would be unlikely to be referred to as spatial computing devices because they lack the more advanced human–computer interactions described above.
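The head-tracked audio idea above can be sketched in a few lines (an illustrative simplification, not a production spatializer): a sound source stays fixed in the room, so its direction relative to the listener is the world azimuth minus the head's yaw reported by the IMU, and a simple equal-power pan maps that direction to left/right ear gains.

```python
import math

def head_relative_azimuth(source_azimuth_deg, head_yaw_deg):
    """Angle of the source relative to where the head points, in (-180, 180]."""
    rel = (source_azimuth_deg - head_yaw_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel

def stereo_gains(rel_azimuth_deg):
    """Equal-power pan: -90 deg = fully left, +90 deg = fully right."""
    pan = max(-90.0, min(90.0, rel_azimuth_deg)) / 90.0  # -1 .. 1
    theta = (pan + 1.0) * math.pi / 4.0                  # 0 .. pi/2
    return (math.cos(theta), math.sin(theta))            # (left, right)

# A chime fixed due east (90 deg); the wearer turns to face it (yaw 90 deg):
rel = head_relative_azimuth(90.0, 90.0)   # now straight ahead → 0.0
left, right = stereo_gains(rel)           # equal gain in both ears
```

Real spatial audio replaces the pan with head-related transfer functions (HRTFs), but the world-minus-head subtraction is the core of making sounds "stay put" in the room.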

Spatial computing often refers to personal computing devices like headsets and headphones, but other human–computer interactions that leverage real-time spatial positioning for displays, like projection mapping or cave automatic virtual environment (CAVE) displays, can also be considered spatial computing if they respond to spatial input from the participants.

History

The term apparently originated in the field of GIS around 1985[2] or earlier to describe computations on large-scale geospatial information. This is somewhat related to the modern use, but on the scale of continents, cities, and neighborhoods.[3] Modern spatial computing is centered more on the human scale of interaction, around the size of a living room or smaller, though in the aggregate it is not limited to that scale.

In the early 1990s, as the field of virtual reality was beginning to be commercialized beyond academic and military labs, a startup called Worldesign in Seattle used the term "spatial computing"[4] to describe the interaction between individual people and 3D spaces, operating more at the human end of the scale than previous GIS examples may have contemplated. The company built a CAVE-like environment it called the Virtual Environment Theater, which presented a 3D virtual flyover of the Giza Plateau, circa 3000 BC. Robert Jacobson, CEO of Worldesign, attributes the origins of the term to experiments at the Human Interface Technology Lab at the University of Washington, under the direction of Thomas A. Furness III. Jacobson was a co-founder of that lab before spinning off this early VR startup.

In 1997, an academic publication by T. Caelli, Peng Lam, and H. Bunke called "Spatial Computing: Issues in Vision, Multimedia and Visualization Technologies" introduced the term more broadly for academic audiences.[5]

The specific term "spatial computing" was later referenced again in 2003 by Simon Greenwold,[6] as "human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces". In a 2010 TED talk,[7] MIT Media Lab alumnus John Underkoffler gave a live demo of the multi-screen, multi-user spatial computing systems being developed by Oblong Industries, which sought to bring to life the futuristic interfaces Underkoffler had conceptualized for the films Minority Report and Iron Man.

Products

Apple announced Apple Vision Pro, a device it markets as a "spatial computer", on June 5, 2023. Its features include Spatial Audio, two 4K micro-OLED displays, the Apple R1 chip, and eye tracking; it was released in the United States on February 2, 2024.[8] In announcing the platform, Apple invoked its history of popularizing 2D graphical user interfaces that supplanted prior human–computer interface mechanisms such as the command line. Apple presents spatial computing as a new category of interactive device, on the same level of importance as the introduction of the 2D GUI.

Magic Leap had previously used the term "spatial computing" to describe its own devices, starting with the Magic Leap 1. Its use seems consistent with Apple's, although the company eventually stopped using the term.[9] Meta Platforms famously used the term "metaverse" to describe its future plans. The key difference is that spatial computing does not imply the vast inter-networking of sites into a common fabric, akin to a 3D version of the internet. Spatial computing can be conducted by one person, two people, or a small number of people with their local computer systems, without regard to the greater network of other users and spaces. As such, it may serve to minimize (but not completely eliminate) many of the negative aspects of large-scale online social interactions, such as griefing and other harassment, by adding social accountability and better social perception in known groups.[original research?]

Understanding Key Ideas

  • 3D Thinking: Spatial computing involves visualizing and manipulating information within three-dimensional space. Instead of working with flat files and interfaces, users can interact with digital models, simulations, and virtual environments that feel almost tangible.
  • The Environment as Interface: Cameras, sensors, and advanced software allow spatial computing systems to understand the real world. Users' movements, the layout of a room, and even the objects around them can be incorporated into the digital experience.
  • Breaking Input Boundaries: Keyboards and mice give way to natural gestures, voice commands, eye-tracking, and controllers that sense their position within space.
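The natural inputs listed above, such as eye tracking, typically resolve to a ray cast from the user into the 3D scene. A minimal sketch of gaze-based selection (an illustrative toy, with objects modeled as spheres and all names and values invented for the example):

```python
def gaze_pick(eye, gaze, objects):
    """Return the name of the nearest object hit by a ray from `eye`
    along `gaze` (3-tuples; gaze need not be unit length), or None.
    `objects` maps name -> (center, radius)."""
    gl = sum(c * c for c in gaze) ** 0.5
    d = tuple(c / gl for c in gaze)  # unit gaze direction
    best = None
    for name, (center, radius) in objects.items():
        oc = tuple(center[i] - eye[i] for i in range(3))
        t = sum(oc[i] * d[i] for i in range(3))  # closest approach along ray
        if t < 0:
            continue  # object is behind the viewer
        closest = tuple(eye[i] + t * d[i] for i in range(3))
        dist2 = sum((closest[i] - center[i]) ** 2 for i in range(3))
        if dist2 <= radius ** 2 and (best is None or t < best[0]):
            best = (t, name)
    return best[1] if best else None

scene = {"lamp": ((0.0, 0.0, 2.0), 0.3), "door": ((1.0, 0.0, 4.0), 0.5)}
print(gaze_pick((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene))  # → lamp
```

The same ray-cast logic underlies hand-ray pointing and controller aiming; only the source of the ray changes.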

Technologies That Make It Possible

Spatial computing isn't about a single device, but a convergence of different technologies:

  • Augmented Reality (AR): Overlays digital content on the real world through devices like smartphones or specialized headsets.[10]
  • Virtual Reality (VR): Creates fully immersive digital environments, often through headsets.
  • Mixed Reality (MR): A more advanced blending of AR and VR where digital objects interact realistically with the physical environment.
  • Computer Vision and Sensors: Enable devices to 'see' and understand the world around them.
  • Artificial Intelligence (AI): Helps process spatial data, recognize objects, and make real-time adaptations in a spatial computing experience.
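Several of the technologies above depend on fusing IMU sensor streams into a stable pose estimate. One common and simple approach is a complementary filter (sketched below with invented example values): the gyroscope tracks fast rotation but drifts over time, while the accelerometer gives a noisy but drift-free tilt reference.

```python
def complementary_filter(angle_deg, gyro_dps, accel_angle_deg, dt, alpha=0.98):
    """Blend the gyro-integrated angle (weight alpha) with the
    accelerometer's tilt estimate (weight 1 - alpha)."""
    gyro_estimate = angle_deg + gyro_dps * dt  # integrate angular rate
    return alpha * gyro_estimate + (1.0 - alpha) * accel_angle_deg

# Device held still at a 10 deg tilt while a biased gyro reads +1 deg/s:
angle = 0.0
for _ in range(200):  # 2 s of samples at 100 Hz
    angle = complementary_filter(angle, gyro_dps=1.0,
                                 accel_angle_deg=10.0, dt=0.01)
print(round(angle, 1))  # converges near the 10 deg accelerometer reference
```

Production headsets use more sophisticated estimators (e.g., Kalman filters over full 6-DoF pose), but the principle of correcting fast, drifting sensors with slow, absolute ones is the same.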
Future Use Of Spatial Technology

Where Spatial Computing Is Making a Difference

This field is still evolving, but its potential is vast:

  • Industry: Design, manufacturing, and training are transformed with 3D models and spatial instructions.
  • Retail: Interactive product experiences and location-based guidance enhance shopping.
  • Medicine: Spatial models improve surgical planning, and rehabilitation can become more engaging.
  • Architecture: Virtual tours of buildings, and real-time on-site collaboration become possible.
  • Education: Immersive environments provide new ways to learn through simulations and exploration.
  • Entertainment: Gaming grows more interactive, and storytelling can take on an entirely new dimension.

Challenges and Future Directions

  • Technical Limitations: Improvements in hardware performance, power efficiency, and sensor technology are needed for wider adoption and unhindered user experiences.
  • Hardware Needs: More powerful, lightweight, and less expensive devices are key for widespread adoption.
  • User Experience (UX): Designing intuitive and seamless spatial computing experiences remains a challenge.
  • Accessibility: Ensuring that people with disabilities can fully benefit from spatial computing technologies.
  • Privacy and Security: Addressing concerns related to data collection and potential misuse of information about a user and their surroundings.


Spatial computing is poised to change how we work, learn, and connect. As the technology matures, expect to see it reshape industries and bring even more seamless interactions between ourselves and the digital world.

References

  1. ^ Ovide, Shira (2024-02-02). "Apple's Vision Pro is 'spatial computing.' Nobody knows what it means". Washington Post. Retrieved 2024-02-02.
  2. ^ Reeve, D. E. (April 1985). "Computing in the geography degree: limitations and objectives". Journal of Geography in Higher Education. 9 (1): 37–44. doi:10.1080/03098268508708923. ISSN 0309-8265.
  3. ^ "Towards intelligent spatial computing for the Earth sciences in South Africa". 1993.
  4. ^ Jacobson, Bar-Zeev, Wong, Dagit (1993). "The Virtual Environment Theater using Spatial Computing". RealityPrime.
  5. ^ Caelli, Terry; Bunke, Horst (1997). Spatial Computing: Issues in Vision, Multimedia and Visualization Technologies. World Scientific. ISBN 978-981-02-2924-5.
  6. ^ Greenwold, Simon (June 2003). "Spatial Computing" (PDF). MIT Graduate Thesis. Retrieved 22 December 2019.
  7. ^ Underkoffler, John (2010). "Pointing to the Future of UI". TED Conference.
  8. ^ "Apple Vision Pro". Apple. Apple Inc. Retrieved 5 June 2023.
  9. ^ "Magic Leap". Magic Leap. Magic Leap Inc. Retrieved 9 Feb 2024.
  10. ^ Billinghurst, Mark; Kato, Hirokazu (2002-07-01). "Collaborative augmented reality". Communications of the ACM. 45 (7): 64–70. doi:10.1145/514236.514265. ISSN 0001-0782.