A 3-dimensional scene.
How much light from ambient lights is reflected.
A node that is connected to all lower-level nodes.
3D computer animation that combines 3D models of objects and programmed or hand keyframed movement.
The Animation Sequencer is a keyframe-based animation editor that permits you to create and edit animations. It is a tabbed window that consists of media controls, a properties panel, track list, timeline, and track view.
Add notes to a text or diagram, giving explanation or comment.
Applications are platforms, software, and tools that allow you to do a certain thing easily.
The term Artifact is typically used internally when referring to managing data in the server systems.
Data produced by applying a project to one or more data sources, which can be visualised or utilised.
Aspect ratio refers to the ratio of the number of horizontal pixels to the number of vertical pixels on a screen, i.e., the width of the screen as compared to its height, usually presented in the form width:height.
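As a small illustration (not part of any product API), a pixel resolution can be reduced to its simplest width:height form using the greatest common divisor:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a pixel resolution to its simplest width:height ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1920, 1080))  # 16:9
print(aspect_ratio(1280, 1024))  # 5:4
```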
A runtime unit consisting of types and other resources.
Any digital file that is consumed within Visionary Render. Assets can include materials, models, particles, textures, audio, movies, etc.
Augmented reality is a technology somewhat similar to virtual reality, but with a few key differences. Instead of trying to create an entirely separate world within the confines of VR gear and using it to replace the real world, it simply overlays visual or audio information over the real world as seen through the user's eyes. It presents information relevant to what the user is seeing at any given time, or filters out other objects, as per the user's needs. Although AR, like mixed reality technology, modifies the world in the user's eyes, unlike in mixed reality, AR modifications are purely informative and are neither anchored to nor do they interact with the real world.
A virtual representation of the experiencer within the virtual world.
The actual speed at which data is being transferred to and from your machine.
A cave automatic virtual environment or CAVE uses projections on the walls and ceiling of a room to create the illusion of a real environment. A viewer can move around anywhere inside the cave, giving them the illusion of immersion. However, it is not possible to directly interact with the environment, since it consists only of projections and leaves the viewer feeling somewhat disconnected from their surroundings.
Allows you to easily perform movement constrained by collisions, without having to deal with a rigid body.
A technique that removes selected colour hues from a video or image.
Detection that virtual objects have intersected, sometimes triggering haptic or visual feedback for the experiencer.
A system unique to Virtalis Reach but similar to the commenting features of packages such as Confluence and MS Word which enable users to collaborate and communicate asynchronously on visualisations by maintaining threads of conversation.
A connection to a source of many items of External Data that are to be visualised.
Degrees of freedom or DOF refers to the different degrees of movement available to an object inside a space. There are six types of movement, which divide into translation (straight-line movement in a specific direction) and rotation (movement about the x-, y-, or z-axis). For instance, hitting a baseball with a baseball bat is not a single movement, but a complex combination of rotations and translations performed at the same time. An object can freely translate along each of the three perpendicular axes. These movements constitute the first three degrees of freedom: surge (forward and backward motion), heave (upward and downward motion), and sway (leftward and rightward motion). An object can also simultaneously rotate about the three axes. These movements constitute the other three degrees of freedom: roll (tilting from side to side), pitch (tilting forwards and backwards), and yaw (turning left and right). Together these add up to six degrees of freedom or 6DOF and can describe every possible movement of an object.
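As a hedged sketch, a 6DOF pose can be represented as three translation components and three rotation components. The field names and axis conventions below are illustrative; real engines differ in which axis each rotation is taken about:

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    # Translation components (e.g. metres)
    surge: float = 0.0  # forward/backward
    sway: float = 0.0   # left/right
    heave: float = 0.0  # up/down
    # Rotation components (e.g. degrees); axis conventions vary by package
    roll: float = 0.0   # tilting side to side
    pitch: float = 0.0  # tilting forwards/backwards
    yaw: float = 0.0    # turning left/right

    def translate(self, dsurge: float, dsway: float, dheave: float) -> None:
        """Apply a translation across the three positional degrees of freedom."""
        self.surge += dsurge
        self.sway += dsway
        self.heave += dheave

pose = Pose6DOF()
pose.translate(1.0, 0.0, 0.5)  # move forward 1.0 and up 0.5
```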
A distinctive symptom or characteristic.
The distance between measuring points.
GPU multicast - render both eyes in parallel on two GPUs.
Dictates what colour is emitted by the surface and how brightly that colour is emitted.
Node types to ignore and whether to override cull settings.
Any item within a data source that can be translated to a Virtalis Model. Such as CAD components and Product Trees.
Eye tracking is a process used in headsets to measure and keep track of the direction of the user's gaze. Using this information, it is possible to reproduce the eye's natural process of bringing objects into and out of focus depending on what the user is concentrating on. Doing so greatly enhances the feeling of immersion, as simulating normal eye processes makes the user's VR experience much more realistic and therefore less likely to break immersion.
The field of view is the total number of degrees visible at any given moment from a given point of view. Most people's field of view is approximately 200 degrees.
Frames per second.
Applies a fast, approximate anti-aliasing technique to reduce jagged edges.
Reset the settings to their default value when the application was installed.
Frame rates are the frequency at which an image/frame on a monitor is replaced by another. Each frame represents a still image that replaces the previous image, giving the illusion of change/movement on a monitor. Generally, the two most common frame rates are 30 fps and 60 fps, meaning 30 frames per second and 60 frames per second, respectively. The lower the frame rate, the fewer images are used to bridge the gap between one scene and the next, so each successive image differs more from the last, producing jerkier or choppier movement. In contrast, high frame rates create a feeling of smoothness, as they have the benefit of using more images with progressively smaller changes for each second of content.
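The relationship between frame rate and the time available to render each frame is simple arithmetic; a minimal sketch:

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds available to render each frame at a given frame rate."""
    return 1000.0 / fps

print(frame_time_ms(30))  # ~33.3 ms per frame
print(frame_time_ms(60))  # ~16.7 ms per frame
```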
A graphics-based operating system interface that uses icons, menus and a mouse (to click on the icon or pull down the menus) to manage interaction with the system.
A collection.
The forms of objects which have boundary lines, angles and surfaces.
The Heads-Up-Display is a way of showing data to the user without forcing them to look away from their current position, improving the user's ability to view and identify relevant information and lowering the time it takes to do so.
Haptics are a way of providing feedback to the user for actions taken in virtual reality environments, physically simulating the expected results of the user's movements, similar to vibration effects on controllers. When the user tries to grab or touch something in the VR setting, gloves or other gear worn by the user can simulate the pressure to the corresponding part of the user's body and make it feel like the user is touching a virtual object.
A head mounted display or HMD refers to a VR headset, basically a set of lenses combined with either an inbuilt display or attached smartphone in the form of a helmet or goggles that can be strapped around your head. Some contain a variety of sensors that can track the movement of the head.
Head tracking is a process that monitors the current position and orientation of the user's head. This is extremely important in VR as it allows the virtual point of view to follow around the user's point of view, so the user can turn their head and see different angles of the same scene within the VR environment.
Light that points in the view direction of the camera.
A heatmap is an analytical tool used to show what a user is looking at within a VR experience, graphical interface, etc. It uses a system of colour-coding, usually ranging from red (hot) to blue or green (cold), to create a graphical representation of the focus of the user's attention.
Manipulators that are rendered in the 3D scene to enable one to tweak the emission shape of the particle system.
Reproduce a higher dynamic range of luminosity. This typically makes the scene appear brighter.
The distance between the centres of the pupils of your eyes.
Immersion is the viewer's sense of being part of a virtual environment. It is achieved when sound, design, atmosphere, visualisation, etc. are able to create a sense of actually being in the virtual world.
Bringing in information from a file into a program.
Input refers to the method of control you will use for virtual reality. This could be a mouse and keyboard, a gamepad, or even motion tracking.
The creation of new values that lie between known values.
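A minimal example of linear interpolation, the simplest case of creating a new value between two known ones (the function name `lerp` is illustrative, not a product API):

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linearly interpolate between a and b; t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

print(lerp(0.0, 10.0, 0.25))  # 2.5, a quarter of the way from 0 to 10
```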
Progressive rendering system designed to generate photorealistic images.
Functionality wherein users can identify an assembly/sub-assembly/component and examine or deal with it separately.
Judder is a significant shaking of the visual content within the Head Mounted Display.
The absolute position of the last mouse click.
Latency in virtual reality refers to a delay between user input (e.g., head, hand, or leg movements) and output (e.g., visual, haptic, positional, audio) caused by a mixture of technical problems. High latency can lead to a detached experience and can also contribute to motion sickness and dizziness.
Locomotion refers to the means by which the user is able to move around within a VR environment. Most systems use some combination of three different types of locomotion: teleportation, transportation, and perambulation. Teleportation allows the user to point and click on a location to teleport there or select from a predefined list of locations to travel to, giving them a certain freedom of movement but no option for movement in between locations. Transportation makes the user a passenger in a vehicle or on an animal that moves along a predefined path, allowing them to move their head or hands, but making them unable to move away from their mode of transportation in any way beyond choosing a different object to follow. Perambulation uses handheld controllers, the HMD, or room-tracking to track the user's movements and give them the ability to move as they would in the real world.
Lua is a lightweight, high-level, multi-paradigm programming language designed primarily for embedded use in applications. Lua is cross-platform, since the interpreter of compiled bytecode is written in ANSI C, and Lua has a relatively simple C API to embed it into applications.
Provides control over how the textures are mapped into a Model.
Used to describe the surface appearance of a Model.
Combine or cause to combine to form a single entity.
A set of data that describes and gives information about other data.
Mixed reality technology overlays artificial content onto the real world and enables the artificial content to interact with the real world scenery. Additionally, mixed reality allows overlaid content to be interacted with in real time, as it stays continually updated for interactivity.
The practice of transporting and exchanging data between nodes over a shared medium in an information system.
Device or data point in a larger network.
The obscuring or hiding of an object from view by the positioning of other objects in the experiencer's line of sight.
Dictates how opaque the surface is.
A subclass of data source whose External Data are Components (CAD) and Product Trees (CAD) under Product Lifecycle Management.
The point of view or POV is the reference point from which observations, calculations, and measurements take place; the location or position of the viewer/object in question.
3D widget that enables you to change the position, rotation and scale of the selected object.
Rotation around the horizontal (x) axis.
The central point, pin, or shaft on which a mechanism turns or oscillates.
Software that adds new functions to a host program without altering the host program itself.
The ability to track where you are in a physical space e.g. moving around a room.
Positional audio is an audio technique that ties sounds to specific sources within an environment, realistically simulating the things the listener would hear from their point of view. This means that sounds will always come from the expected position relative to the listener.
The number of significant figures to show for each measurement.
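For illustration only (this is not a product setting's implementation), rounding a measurement to a given number of significant figures can be sketched with Python's general-format specifier:

```python
def to_sig_figs(value: float, figures: int) -> str:
    """Format a measurement with the requested number of significant figures."""
    return f"{value:.{figures}g}"

print(to_sig_figs(0.0123456, 3))   # 0.0123
print(to_sig_figs(1234.5678, 3))   # falls back to exponent notation: 1.23e+03
```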
An object containing an ordered list of templates and references to VR Models, which may be either specific or based on a pattern.
Settings of an object on a computer.
Designed to enable the animation of non-transform values.
Standard OpenGL stereo mode.
The percentage of light that is reflected straight back at the light when the surface is directly facing the light, and inversely how much light is scattered instead.
Image resolution refers to the degree of detail an image holds, represented by the number of pixels. Higher resolutions make images sharper, as they increase the number of pixels used to represent images, which adds more detail to them. Screen size can drastically affect the sharpness of an image; if the screen is small enough, even low-resolution images can become nearly identical to far higher-resolution images.
An object that transforms VR Models based on certain conditions and logic, increasing the effectiveness by which the data is communicated by visualisation.
Software Development Kit: a range of tools or a platform that allows developers to create software or technology for themselves.
Controls the intensity of colours in the rendered image.
Projection.
An overlay of the scene.
Generates visuals in the scene to show the animation paths of targets and to permit keyframe positions to be edited visually.
A particular order in which related things follow each other.
Left half window is left eye, right half is right eye. Pixels are 2:1 ratio.
Left half window is left eye, right half is right eye. Pixels are square.
Environment cues with the added purpose of helping the user to interpret the virtual environment.
Single-pass stereo - render both eyes in parallel on the primary GPU.
How smooth or rough the surface is.
A fraction.