Three Dimensional And Virtual Reality Technology Computer Science Essay

Virtual reality (VR) is a technology that provides an interface between humans and computerised applications based on real-time, three-dimensional (3D) graphical worlds. Interaction with 3D graphics is an important element of virtual reality. There are three types of virtual reality: desktop, projection and immersive.

First, desktop virtual reality lets the user interact with and build a virtual world through a personal computer. It displays the 3D virtual world on a personal computer without any special motion-tracking devices or equipment. Nowadays, many computer games serve as examples of desktop virtual reality. In a game such as the one in Figure 1, the characters and objects are all 3D graphics, and various triggers and responsive characters make the user feel as though they are inside a virtual world.

Figure 1: The Sims 3. The 3D human models live in the virtual world. [Source: http://www.tuaw.com/2009/02/06/the-sims-3-coming-to-mac-and-iphone-summer-2009/]

Second, projection virtual reality displays the virtual world on large-screen stereoscopic graphics, polarising the 3D image from a pair of conventional video projectors. It plays a very important role in the real world by helping people reduce dangerous work, such as aircraft training: a pilot can use an aeroplane simulation system for flight practice. The image below (Figure 2) shows projection virtual reality.

Figure 2: An example of projection virtual reality. [Source: http://www.converj.com/sites/converjed/2006/05/vr_rooms.html]

Lastly, immersive virtual reality is a fuller realisation of the virtual reality concept. An immersive virtual world uses head-mounted displays and interactive gloves as its display and interaction equipment; the image in Figure 3 is an example. The Polhemus Fastrak system tracks the head and hand positions and feeds them into the 3D world, so the user feels present in, or even part of, the virtual environment. Immersive virtual reality exploits natural human skills such as limb movement, gestures and stereoscopic vision, giving the user a richer experience.

Figure 3: The user is wearing a stereoscopic head-mounted display and interactive gloves to control objects inside the virtual world. [Source: http://medgadget.com/archives/2006/11/virtual_reality_3.html]

Augmented Reality

Augmented reality (AR) is the combination of elements of a physical, real-world environment with virtual, computer-generated imagery, and is a growing area within virtual reality. In general it overlays graphics on a real-world environment in real time, so that the user's view of the world and the computer interface merge into one scene.

Augmented reality presented to the user strengthens that person's performance in, and perception of, the world. Its goal is a system in which the user cannot tell the difference between the real world and the virtual graphics, and instead sees a single real scene. The real-world environment provides rich information that is hard to replicate in a computer: such replicas are either oversimplifications, such as the environments created for immersive games and entertainment, or systems that create a more realistic environment at a very high price, such as aircraft simulators. Augmented reality combines the real scene viewed by the user with a computer-generated virtual scene that augments it with additional information.

Figure 4: The Reality-Virtuality Continuum.

Nowadays, marker-based augmented reality is widely used. Marker-based augmented reality uses a camera and a marker to determine the position and orientation of the models. The marker-based augmented reality system in Figure 5 uses a toolkit called ARToolKit to calibrate the camera in a marker-based augmented reality system.

Figure 5: Marker-based augmented reality. The combination of the real human hand and the 3D model is augmented reality. [Source: http://www.psfk.com/2009/03/baseball-cards-add-augmented-reality.html]

Augmented Reality vs. Virtual Reality

Virtual reality is complete immersion in a digital world, whereas augmented reality is digital content overlaid on the real world. Augmented reality augments the real world with digital data such as 3D graphics, which is often more engaging than a completely fabricated environment. The difference between augmented reality and virtual reality is that augmented reality stays closer to the real world: it consists mostly of real-world elements and only a minority of computer-generated images.

Both virtual reality and augmented reality face a similar problem in convincing the user. Virtual reality must build the details of a whole world, which can be expensive to fake convincingly. Augmented reality, in contrast, needs computer-generated images that look photorealistic enough to blend seamlessly with the real-world elements. Furthermore, augmented reality is a real-time system that generates the 3D graphics in real time, so the composition of the graphics must be very precise; any lag or inaccuracy will break the experience for the user.

The display devices for augmented reality and virtual reality also differ. An augmented reality display has lower requirements than a virtual reality system, because augmented reality does not replace the real world. In augmented reality, a simple image may be sufficient to become part of the augmented scene, whereas virtual reality requires complete 3D graphics.

The previous points show that augmented reality has lower requirements than virtual reality. That is not the case for tracking and sensing, where the requirements of augmented reality are much stricter than those of virtual reality systems. A major reason for this is the registration problem.

Basic Subsystem        Virtual Reality    Augmented Reality
Scene Generator        More advanced      Less advanced
Display Device         High quality       Low quality
Tracking and Sensing   Less advanced      More advanced

Table 1: Comparison of the requirements of Augmented Reality and Virtual Reality

Augmented Reality System

Several fields are important for creating an augmented reality system. Computer graphics, computer vision and user interfaces all contribute to building augmented reality systems.

Typical Augmented Reality System

Figure 6 – Components of an Augmented Reality System

The video camera performs a perspective projection of the 3D world onto a 2D image plane. The camera's position, pose, focal length and lens distortion determine what is projected onto its image plane. The computer graphics system generates the virtual image by modelling the virtual objects in a frame of reference. The graphics system needs imaging information about the real scene to render the objects correctly; this information controls the synthetic camera used to produce the virtual object image. The virtual object image is then combined with the real-world image to achieve augmented reality.
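As a simple illustration of the final combining step, the sketch below overlays an already-rendered virtual image (assumed to carry an alpha channel) onto a camera frame. The function name and array layout are illustrative and not taken from any particular AR toolkit.

import numpy as np

def composite(real_frame, virtual_rgba):
    """Overlay a rendered virtual image (RGBA, uint8) onto a camera frame (RGB, uint8).

    Pixels where the virtual layer is transparent keep the real-world image,
    which is how the virtual object image is combined with the real scene."""
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0   # per-pixel opacity
    virtual_rgb = virtual_rgba[..., :3].astype(np.float32)
    real_rgb = real_frame.astype(np.float32)
    blended = alpha * virtual_rgb + (1.0 - alpha) * real_rgb
    return blended.astype(np.uint8)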

Performance Issues

An augmented reality system lets the user move freely through the scene and see properly rendered graphics, so a real-time system is preferred. The two performance criteria that must be considered are the update rate for generating the augmented image and the precision of the registration between the real and virtual images. The real-time constraint means that a rendered virtual object must not show any visible jump while the user views the augmented image. If the graphics system can render the virtual scene at least 10 times per second, no visible jump is perceived. This works well for simple to medium-complexity graphics scenes; the more photorealistic the rendering, the more real the virtual object appears in the scene.
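A minimal way to check the first criterion is to time each frame against the 100 ms budget implied by a 10 Hz update rate. This is only a sketch; render_augmented_frame below is a placeholder for the real capture, track, render and composite step.

import time

FRAME_BUDGET_S = 0.10   # 10 updates per second, as discussed above

def run_loop(render_augmented_frame, num_frames=100):
    """Check whether the augmenting-image update rate meets the real-time constraint."""
    slow_frames = 0
    for _ in range(num_frames):
        start = time.perf_counter()
        render_augmented_frame()          # placeholder for capture + track + render + composite
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET_S:      # a frame slower than 100 ms causes a visible jump
            slow_frames += 1
    print(f"{slow_frames}/{num_frames} frames missed the {FRAME_BUDGET_S * 1000:.0f} ms budget")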

There are two possible failure cases for the second performance criterion: misregistration of the real and virtual scenes, and time delays in the augmented reality system.

Noise in the system causes misregistration of the real and virtual scenes. The position and orientation of the camera with respect to the real scene must be known, and noise in these measurements produces registration errors between the virtual and real image scenes. Fluctuations in the measured values cause jitter in the viewed image. Augmented reality is sensitive to such visual errors because the virtual object appears not to be fixed in the real scene, or is positioned wrongly. Under the right conditions, even single-pixel misregistration can be detected.

Time delay in the system is the other failure factor for the second performance criterion. As mentioned above, 10 updates per second is a suitable real-time performance; below that, the rendered object will appear to jitter. The virtual object jitters, or the system lags, because of delayed computation of the camera position or incorrect positioning of the graphics camera. An augmented reality system should therefore be designed to minimise delay and achieve good real-time performance.

Display Technologies

[Source: Alternative Augmented Reality Approaches: Concepts, Techniques, and Applications]

There are three types of display technologies for augmented reality: monitor-based displays, video see-through displays and optical see-through displays. The monitor-based display is the most common today, and its equipment is also cheaper and easier to find.

Monitor Based Display

Figure 7 – Monitor-Based Augmented Reality

Monitor-based augmented reality is also called "window on the world". It composites video and displays the objects on a regular monitor. Monitor-based augmented reality displays are non-immersive, introduce no distortion and provide a remote view of the real world.

Video See-through

Figure 8 – Video See-through Augmented Reality Display

There are two types of head-mounted displays (HMDs), called video see-through and optical see-through. The term "see-through" means that the user can see the real-world scene in front of them while wearing the HMD. A video see-through HMD gives the user complete visual isolation and uses video cameras aligned with the display to capture the real-world scene. This display technology is similar to the monitor-based display.

Optical See-through

Figure 9 – Optical See-through Augmented Reality Display

To give a true view of the world, the optical see-through display eliminates the video channel: it optically combines the real-world scene and the virtual objects in front of the user. Optical see-through displays are similar to heads-up displays (HUDs), which often appear in aircraft cockpits and experimental cars. Because an optical see-through display shows the real-world scene instantaneously, it is impossible to compensate for system delays.

Tracking Requirements

In an augmented reality system the registration is with the visual field of the user. The type of display used by the system determines the accuracy needed for registration of the real and virtual images. The central fovea of a human eye has a resolution of about 0.5 minutes of arc; in this area the eye is capable of differentiating alternating brightness bands that subtend one minute of arc. That capability defines the ultimate registration goal for an augmented reality system. When an optical see-through display is used, the resolution of the virtual image is mapped directly onto this real-world view. If a monitor or video see-through display is used, then both the real and virtual worlds are reduced to the resolution of the display device. The current technology for head tracking specifies an orientation accuracy of 0.15 degrees, which falls short of what is needed to maintain single-pixel alignment on augmented reality displays. Magnetic trackers also introduce errors caused by any surrounding metal objects in the environment; these appear as position and orientation errors that cannot easily be modelled and that change if any of the interfering objects move. In addition, measurement delays have been found in the 40 to 100 millisecond range for typical position sensors, which is a significant portion of the 100 millisecond cycle time needed for real-time operation.
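To see roughly why 0.15 degrees falls short of single-pixel alignment, consider a hypothetical head-mounted display with a 40-degree horizontal field of view spread over 1280 pixels; these figures, and the assumption that the 0.15 value is in degrees, are illustrative rather than taken from the text.

# Hypothetical display parameters (not from the text).
fov_deg, h_pixels = 40.0, 1280
deg_per_pixel = fov_deg / h_pixels              # about 0.031 degrees (~1.9 arcmin) per pixel
tracker_error_deg = 0.15                        # orientation accuracy quoted above
error_in_pixels = tracker_error_deg / deg_per_pixel
print(round(deg_per_pixel, 4), round(error_in_pixels, 1))   # 0.0312, 4.8

Under those assumptions, a 0.15-degree orientation error shifts the virtual image by roughly five pixels, far from single-pixel alignment.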

Augmented Reality Main Process

Augmented reality has two important processes: marker recognition and rendering. These two processes make up the augmented reality system. The steps of the system are given below.

Steps of the augmented reality system (a code sketch follows Figure 10):

Place the marker in front of the webcam so that the webcam can capture the marker.

The algorithm compares the detected marker with the markers in the library.

If the marker is in the library, the object is rendered on the marker.

Figure 10 – Marker-based augmented reality system process steps.
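The loop below sketches these three steps with a webcam, assuming OpenCV for video capture. The helper functions detect_marker, match_marker and render_on_marker are placeholders for the detection, library matching and rendering stages described in the following sections.

import cv2

def run_marker_ar(marker_library, detect_marker, match_marker, render_on_marker):
    """Illustrative main loop for the three steps above; the helpers are placeholders."""
    cap = cv2.VideoCapture(0)                     # step 1: the webcam captures the marker
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        candidate = detect_marker(frame)          # find a square marker region, if any
        if candidate is not None:
            marker_id = match_marker(candidate, marker_library)        # step 2: compare with library
            if marker_id is not None:
                frame = render_on_marker(frame, candidate, marker_id)  # step 3: render the object
        cv2.imshow("augmented view", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()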

Marker Recognition

[Source: http://studierstube.icg.tu-graz.ac.at/thesis/marker_detection.pdf]


Marker recognition is also called marker detection. The marker detection function of an augmented reality system requires a marker, a video camera to capture the marker, and an algorithm to verify the marker. Marker design is the important part of this detection function: the marker is surrounded by a thick, square black border, and the pattern sits inside the border.

In the marker detection function, the first step is to detect the marker's black border by finding connected groups of pixels below a certain gray-value threshold. The outline of each group is then extracted, and groups that can be surrounded by four straight lines are marked as potential markers. The four corners of a potential marker are used to compute a homography, whose purpose is to remove the perspective distortion. With the marker's internal pattern brought to a canonical frontal view, a grid of N x N samples (for example 16x16 or 32x32) of gray values is taken inside it. Those gray values build a feature vector that is compared, by correlation, against a library of feature vectors of known markers. The output of this template matching is a confidence factor; the marker is considered found when the confidence factor exceeds a threshold. This identification mechanism, with marker verification by correlation, can produce high false-positive and inter-marker confusion rates. Marker uniqueness is reduced as the library size grows, which also increases the inter-marker confusion rate.

Figure 11 – The matrix barcode.

Figure 12 – Threshold
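The sketch below follows the detection steps just described: thresholding, border detection, homography, N x N sampling and correlation against a library of templates. It assumes OpenCV 4 and a library given as a dictionary of marker id to N x N gray-value template; the threshold values, minimum quad area and corner ordering are illustrative simplifications.

import cv2
import numpy as np

def detect_markers(frame, library, n=16, confidence_threshold=0.8):
    """Illustrative marker detection; library maps marker id -> n x n gray template."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # 1. Group pixels below a gray-value threshold (the thick black border).
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    found = []
    for contour in contours:
        # 2. Keep outlines that can be approximated by four straight lines.
        quad = cv2.approxPolyDP(contour, 0.03 * cv2.arcLength(contour, True), True)
        if len(quad) != 4 or cv2.contourArea(quad) < 500:
            continue
        # 3. A homography from the four corners removes the perspective distortion.
        src = quad.reshape(4, 2).astype(np.float32)   # corner ordering assumed consistent
        dst = np.array([[0, 0], [n - 1, 0], [n - 1, n - 1], [0, n - 1]], dtype=np.float32)
        H = cv2.getPerspectiveTransform(src, dst)
        patch = cv2.warpPerspective(gray, H, (n, n))  # n x n grid of gray values
        feature = patch.flatten().astype(np.float32)
        # 4. Correlate the feature vector against the library of known markers.
        for marker_id, template in library.items():
            confidence = np.corrcoef(feature, template.flatten().astype(np.float32))[0, 1]
            if confidence > confidence_threshold:
                found.append((marker_id, src, confidence))
    return found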

Rendering

[Source: Augmented Reality: A Practical Guide]

Rendering in augmented reality is the technique used to display the object on the correct marker. After marker detection has found the marker, the system finds the pose, sets the model-view matrix, and then performs the rendering operations relative to the marker and to the 3D world. In other words, the marker allows the augmented reality algorithm to calculate where to place the virtual object so that it displays correctly in the augmented world. Computer vision software finds the markers from the library in the image, and further functions help to load and display the 3D model.

In addition, the augmented reality system is able to determine the angle and position of the marker, and it uses this information to determine the correct position and angle of the 3D object. When this computation is complete, the 3D object is displayed, overlaid on top of the camera image, and the augmented reality view is formed. If the system renders the 3D object correctly, the object appears to sit in the world scene. Once the position and angle are calculated, the 3D object is adjusted as the marker is moved around the 3D world.

Figure 13 – Rendering the object on the marker. [Source: http://www.computergraphica.com/2006/12/20/osgart-10-augmented-reality-software-development-kit/]
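One common way to obtain the marker's position and angle, sketched below, is to estimate its pose from the four detected corners and pack the result into a 4x4 model-view matrix for the renderer. The text does not prescribe a particular pose algorithm; OpenCV's solvePnP is used here as a stand-in, and the camera intrinsics are assumed to be known.

import cv2
import numpy as np

def marker_modelview(corners_2d, marker_size, camera_matrix, dist_coeffs):
    """Estimate the marker's position and angle and pack them into a 4x4
    model-view matrix that a renderer could use to place the 3D object.
    corners_2d: the four detected marker corners in the camera image (4x2)."""
    s = marker_size / 2.0
    # Marker corners in the marker's own coordinate frame (z = 0 plane).
    corners_3d = np.array([[-s,  s, 0], [ s,  s, 0],
                           [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 rotation matrix
    modelview = np.eye(4)
    modelview[:3, :3] = R               # orientation (angle) of the marker
    modelview[:3, 3] = tvec.ravel()     # position of the marker
    return modelview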

Augmented Rendering Techniques

The virtual-world models and real-world models are specified in an affine representation. Four non-coplanar points in the scene define the affine frame. Given an affine frame of reference defined by four or more non-coplanar three-dimensional points, the projection of any point in the collection can be calculated as a linear combination of the projections of those four points. The affine representation therefore allows the projection of a point to be calculated without any camera position information or knowledge of the calibration parameters. An affine representation retains only those object properties that are preserved under affine transformations: lines, intersections of lines and planes, and parallelism are preserved. The geometric invariants technique has been used in augmented reality systems, first for overlaying 2D image enhancements; this work was later expanded to include 3D rendering of virtual objects. To properly integrate virtual images into images of a real scene, the problem reduces to:

Track the affine basis points used for the transformation.

Calculate the affine representation.

Calculate the projections of the virtual objects as linear combinations of the projections of the affine basis points.

Camera Viewing Geometry

In augmented reality, projecting a virtual object accurately requires knowing the precise combined effect of three transformations, which together determine how the object appears on the various image planes. Figure 14 below lists the three transformations.

Object-to-World

World-to-Camera

Camera-to-Image

Figure 14 – The three important transformations between the virtual objects, the real world, the camera, and the image it produces.

[u v 1]^T = P_(3×4) · C_(4×4) · O_(4×4) · [X Y Z 1]^T

Equation 1 – Projection of a virtual point in homogeneous coordinates.

Term            Description of the homogeneous transformation
[X Y Z 1]^T     Homogeneous coordinates of a virtual point.
[u v 1]^T       Its projection in the graphics image plane.
O_(4×4)         Virtual object coordinates to world coordinates (object-to-world).
C_(4×4)         World coordinates to camera coordinates (world-to-camera).
P_(3×4)         Projection operation of the synthetic graphics camera onto the graphics image plane (the matrix modelling the object's projection onto the image plane).

Table 2 – Description of the terms of the homogeneous transformation in Equation 1.
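A small numerical sketch of Equation 1, with made-up transformations, shows how the three matrices compose to give pixel coordinates; the values of O, C, P and the point are purely illustrative.

import numpy as np

# Illustrative (made-up) transformations for Equation 1.
O = np.eye(4)                       # object-to-world: identity for simplicity
C = np.eye(4); C[2, 3] = 5.0        # world-to-camera: shift points 5 units in front of the camera
f = 800.0                           # focal length of the synthetic camera, in pixels
P = np.array([[f, 0, 320, 0],       # camera-to-image projection (3x4) with image centre (320, 240)
              [0, f, 240, 0],
              [0, 0,   1, 0]], dtype=float)

point_world = np.array([0.1, 0.2, 1.0, 1.0])   # [X Y Z 1]^T, a virtual point
u, v, w = P @ C @ O @ point_world              # Equation 1
print(u / w, v / w)                            # pixel coordinates, about (333.3, 266.7)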

The same expression exists for projecting a real point onto the camera image plane. Equation 1 assumes that the reference frames defined for each element are independent and not related to each other. Instead, the frame can be defined by four non-coplanar points in the camera's view, visually tracked through all the video frames. Virtual points and real points in space then use a single homogeneous transformation.

[u v 1]^T = Π_(3×4) · [x y z 1]^T

Equation 2 – Projection of a point expressed in affine coordinates.

Term            Description of the homogeneous transformation
[x y z 1]^T     The point [X Y Z 1]^T transformed to affine coordinates.
Π_(3×4)         Projects a 3D affine point onto the image plane; it combines the effects of the change in the object's representation as well as the object-to-world, world-to-camera and projection transformations.

Table 3 – Description of the terms of the homogeneous transformation in Equation 2.

The elements of the projection matrix Π_(3×4) turn out to be the image coordinates of the fiducial points. Therefore the image locations of the fiducial points contain all the information needed to project the 3D object.

Affine Representation

The affine representation allows a point to be reprojected without knowing the camera position and without any metric information about the points (e.g. the 3D distances between them). Consider a collection of points p1, …, pn ∈ R³, n ≥ 4, at least four of which are non-coplanar. The affine representation of these points does not change if the same non-singular linear transformation (e.g. translation, rotation, scaling) is applied to all of them.

Four non-coplanar points (p0 … p3) in the augmented reality system define the affine reference frame. The point p0 is the origin and p1, p2 and p3 are the affine basis points. p0 is assigned the homogeneous affine coordinates [0 0 0 1]^T, and p1, p2, p3 are assigned affine coordinates of [1 0 0 1]^T, [0 1 0 1]^T and [0 0 1 1]^T respectively. The associated affine basis vectors are b1 = [1 0 0 0]^T, b2 = [0 1 0 0]^T and b3 = [0 0 1 0]^T. Any point px is then represented as px = x·b1 + y·b2 + z·b3 + p0, where [x y z 1]^T are the homogeneous affine coordinates of the point; that is, a linear combination of the affine basis vectors.
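A minimal numerical sketch of this representation: in 3-space the basis vectors correspond to the differences p1 - p0, p2 - p0 and p3 - p0, so the affine coordinates of a point are found by solving a small linear system. The frame and point below are made up for illustration.

import numpy as np

def affine_coordinates(p, p0, p1, p2, p3):
    """Affine coordinates [x y z] of point p with respect to the frame
    defined by origin p0 and basis points p1, p2, p3 (all 3-vectors)."""
    B = np.column_stack([p1 - p0, p2 - p0, p3 - p0])   # non-coplanar points => invertible
    return np.linalg.solve(B, p - p0)

# Example: p is recovered as p0 + x(p1-p0) + y(p2-p0) + z(p3-p0).
p0, p1, p2, p3 = map(np.array, ([0., 0, 0], [1., 0, 0], [0., 1, 0], [0., 0, 1]))
p = np.array([0.5, 2.0, -1.0])
x, y, z = affine_coordinates(p, p0, p1, p2, p3)
print(np.allclose(p, p0 + x * (p1 - p0) + y * (p2 - p0) + z * (p3 - p0)))   # True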

Affine Reprojection

The augmented reality system uses affine reprojection to compute the projections of further points defined in the affine coordinate frame. Reprojection is used to calculate the projection of a virtual object point: given the projections of a 3D point at two camera positions, its projection at a third camera position can be calculated.

Equation 3 – Projection equation.

Term    Description
—       Average distance from the centre of projection to the object points.
—       Scale factor for the image; the approximation holds when the object's depth variation is small compared with its distance from the camera.

Table 4 – Description of the terms in Equation 3.

This representation is important because it can be constructed for any virtual object without any information about the three transformations (Figure 14). It requires tracking non-coplanar fiducial points across all frames. The camera-to-image transformation is then modelled with a weak perspective projection model, which approximates the perspective projection process. The affine representation allows reprojection: the projection of an affine point is calculated without knowing the position of the camera or the camera calibration parameters. The affine frame defined by the projections of the four points is needed to determine the matrix Π (Equation 2). The reprojection property is shown in Figure 15.

Equation 4

Term              Description
I_m               Image (view) in which the point is reprojected.
[u_pi v_pi 1]^T   Projections of the four affine basis points p_i, i = 0 … 3.
[u_p v_p 1]^T     Projection of the point p.
[x y z 1]^T       Homogeneous affine coordinates of p.

Table 5 – Description of the terms in Equation 4.

Figure 15 – Affine Point Reprojection

The projection matrix Π of Equation 2 is fully defined in Equation 4: the projection of a 3D point in a new image viewed by the camera is calculated as a linear combination of the projections of the affine basis points. The affine frame requires only the image projections of the four basis points and the 3D point's affine coordinates. Visual tracking routines, initialised on the feature points, provide these coordinate values for the affine frame.
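Numerically, the reprojection property can be realised as the linear combination described above: given only the image projections of the four basis points in a new view and the point's affine coordinates, its projection follows without any camera position or calibration information. The sketch below assumes an affine (weak-perspective) camera, as in the text; the exact matrix layout of Equation 4 may differ.

import numpy as np

def reproject(affine_xyz, basis_projections):
    """Project a point with affine coordinates [x y z] into a new view, given
    only the image projections of the four affine basis points in that view.
    basis_projections: 4x2 array of (u, v) for p0, p1, p2, p3."""
    x, y, z = affine_xyz
    u0, u1, u2, u3 = basis_projections[:, 0]
    v0, v1, v2, v3 = basis_projections[:, 1]
    u = u0 + x * (u1 - u0) + y * (u2 - u0) + z * (u3 - u0)
    v = v0 + x * (v1 - v0) + y * (v2 - v0) + z * (v3 - v0)
    return np.array([u, v])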

Affine Reconstruction

The affine coordinates of each point are computed from Equation 4. The projections of the point in at least two views, together with the projections of the affine basis points, are needed. Equation 4 is used because it leads to an over-determined system of equations. Given two views, I1 and I2, of a scene in which the projections of the affine basis points (p0, …, p3) are known, the affine coordinates [x y z 1]^T of any point p can be found from the solution of the following equation:

Equation 5

Term              Description
I_m               Image m = 1, 2 (the two views).
[u_pi v_pi 1]^T   Projections of the four affine basis points p_i, i = 0 … 3, in each view.
[u_p v_p 1]^T     Projections of the point p in each view.
[x y z 1]^T       Homogeneous affine coordinates of p.

Table 6 – Description of the terms in Equation 5.
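The reconstruction can be sketched as the over-determined system described above: each view contributes two equations in the three unknown affine coordinates, so two views give four equations, solved here in the least-squares sense. Variable names and data layout are illustrative.

import numpy as np

def reconstruct_affine(proj_p, basis_proj):
    """Recover affine coordinates [x y z] of a point p from its projections in
    two (or more) views, given the basis-point projections in the same views.
    proj_p: list of (u, v) for p, one per view.
    basis_proj: list of 4x2 arrays of (u, v) for p0..p3, one per view."""
    A, b = [], []
    for (u_p, v_p), basis in zip(proj_p, basis_proj):
        u0, u1, u2, u3 = basis[:, 0]
        v0, v1, v2, v3 = basis[:, 1]
        A.append([u1 - u0, u2 - u0, u3 - u0]); b.append(u_p - u0)
        A.append([v1 - v0, v2 - v0, v3 - v0]); b.append(v_p - v0)
    # Two views give four equations in three unknowns: an over-determined system.
    xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xyz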

Affine Depth

The affine formulation requires tracking the basis points and rendering the virtual object that augments the real image, so it consumes computing resources, and rendering complex graphics scenes in real time requires a computer graphics system with hardware support for the rendering operations. Z-buffering requires ordering, in depth, all points that project to the same pixel of the graphics image; this is normally done using each point's depth along the optical axis of the synthetic graphics camera. In this system all points are defined by their affine representation, so the notion of depth must be retained in that representation. An orthographic graphics camera renders the virtual objects, so the z value is independent of scale and only the ordering of the points must be maintained. To obtain the depth ordering, the camera's optical axis is determined as the 3D line whose points all project to the same point in the image. The optical axis is the homogeneous vector [ζ^T 0]^T, where ζ is given as the cross product

ζ = π1 × π2

Equation 6 – Optical-axis direction as a cross product.

The two vectors π1 and π2 are formed from the first three elements of the first and second rows of the projection matrix Π. Any translated point p' = p + a·[ζ^T 0]^T projects to the same place in the image as p. Z-buffering therefore uses, as the depth value assigned to each point p, the product p · [ζ^T 0]^T. This completes the projection of an affine point, as expressed in Equation 7.

Equation 7 – Visible surface rendering of a point p on an affine object.

The P ‘s projection and are the image co-ordinates and P ‘s assigned z-value for the w. 4×4 signifier as the sing matrix of transmutations of in writing objects perform by computing machine artworks systems. The upper left 3×3 submatrix has a derived function between artworks system sing and affine projection matrix. In Equation 7, submatrix is a general invertible transmutation. Submatrix work with a Euclidian frame of mention so it is a rotary motion matrix. Real-time rendition of objects used standard artworks hardware mention developed affine frame. Silicon Graphics Reality Engine is a artwork processor. It can execute object rendering with z-buffering for concealed surface remotion.