
Re: Nintendo revolution

Video game play using panoramically-composited depth-mapped cube mapping


Abstract
Video game play is rendered using a panoramic view of a cube-map-style rendering, with an associated depth map supplying three-dimensionality to the pre-rendered scene. The resulting panoramic rendering may be indistinguishable from rendering the original scene in real-time, except that the background is of pre-rendered quality.



--------------------------------------------------
------------------------------
Inventors: Donnelly, Paul; ( Bellevue, WA)
Correspondence Name and Address: NIXON & VANDERHYE, P.C.
1100 N. GLEBE ROAD
8TH FLOOR
ARLINGTON
VA
22201
US


Assignee Name and Address: Nintendo Co., Ltd.
Minami-ku
JP


Serial No.: 636978
Series Code: 10
Filed: August 8, 2003

U.S. Current Class: 345/419
U.S. Class at Publication: 345/419
Int'l Class: G06T 015/00




--------------------------------------------------
------------------------------

Claims


--------------------------------------------------
------------------------------


I claim:

1. A video game playing method comprising: loading a pre-determined environment-mapped image having multiple planar projected images and associated multiple depth map images; at least in part in response to real-time interactive user input, compositing at least one additional object into said mapped images, said compositing using said depth map to selectively render at least portions of said object into said mapped image to provide a composited mapped image; and panoramically rendering said composited mapped image using a desired viewing angle and frustum to provide interactive video game play.

2. The method of claim 1 wherein the environment-mapped image comprises a cube map.

3. The method of claim 1 wherein said mapped image is pre-rendered.

4. The method of claim 1 including performing said compositing step with a home video game system, and further including displaying rendering results on a home color television set.

5. The method of claim 1 further including receiving interactive real-time user inputs via at least one handheld controller, and defining animation of said object in response to said inputs.

6. The method of claim 1 wherein said rendering step comprises applying said post-composited mapped image to a mesh, and rendering said mesh using a current projection matrix to apply desired viewing angle and frustum parameters.

7. The method of claim 1 further including testing whether said object intersects any of multiple faces of said mapped image in at least two dimensions.

8. The method of claim 1 further including performing Z comparison and removing hidden surfaces in response to said Z comparison.

9. The method of claim 1 further including performing said rendering using a frame buffer.

10. A storage medium storing instructions that, when executed by a home video game system or personal computer, provide interactive real-time video game play on a display, said storage medium storing: at least one pre-rendered cube map; at least one pre-rendered depth map corresponding to said cube map; and instructions which, when executed, composite at least portions of said pre-rendered cube map with at least one dynamically-generated object, said compositing based at least in part on said pre-rendered depth map, and generating pre-rendered panoramic image therefrom.

11. The storage medium of claim 10 wherein said pre-rendered cube map comprises six images as if looking through faces of a cube with the viewpoint at the center of the cube.

12. The storage medium of claim 10 wherein said compositing includes rendering a real-time object into a frame buffer storing said pre-rendered cube map using dynamically defined view port and/or frustum parameters.

13. A video game playing system comprising: means for loading a pre-determined environment-mapped image having multiple planar projected images and associated multiple depth map images; means for at least in part in response to real-time interactive user input, compositing at least one additional object into said mapped images, said compositing using said depth map to selectively render at least portions of said object into cube mapped image to provide a composited mapped image; and means for panoramically rendering said composited mapped image using a desired viewing angle and frustum to provide interactive video game play.

--------------------------------------------------
------------------------------

Description


--------------------------------------------------
------------------------------


CROSS-REFERENCES TO RELATED APPLICATIONS

[0001] Priority is claimed from application No. 60/468,645 filed Feb. 13, 2003, which is incorporated herein by reference.

FIELD

[0002] The technology herein relates to video game play, and more particularly to efficient 3D video game rendering techniques using pre-rendered cube or other environment maps.

BACKGROUND AND SUMMARY

[0003] Modern home video games are more exciting and realistic than ever before. Relatively inexpensive 3D home video game platforms contain as much processing power as advanced computer graphics workstations of yesteryear. They can dynamically, interactively produce rich, realistic interesting displays of characters and other objects moving about in and interacting with a simulated three-dimensional world. From your living room on a home color television set, you can now fly virtual fighter planes and spacecraft through simulated battles, drive virtual race cars over simulated race tracks, ride virtual snowboards over simulated snow fields and ski slopes, race simulated jet skis over simulated water surfaces, and journey through exciting virtual worlds encountering all sorts of virtual characters and situations--just to name a few examples--all with highly realistic and exciting 3D images.

[0004] While home video game systems are now relatively powerful, even the most advanced home video game system lacks the processing resources that video game designers dream about. Home video game systems, after all, must be affordable yet have the extremely demanding task of producing high quality images dynamically and very rapidly ( e.g., thirty or sixty frames per second). They must respond in real-time to user controls--reacting essentially instantaneously from the standpoint of the user. At the same time, video game developers and their audiences continually desire ever richer, more complex, more realistic images. More complicated and detailed images place exceedingly high demands on relatively inexpensive video game system processors and other internal circuitry. If the image is too complicated, the video game platform will not be able to render it in the time available. This can sometimes result in incomplete, only partially rendered images that may appear flawed and unrealistic and thus disappoint users.

[0005] One approach to help solve this problem involves pre-rendering complicated scenes in advance and then adapting or otherwise manipulating those scenes in real-time to provide interactive video game play. For example, it is possible for a video game designer at the time he or she creates a video game to use a very powerful computer to pre-calculate and pre-render background scenery and other images. Developing a new video game can sometimes take a year or more, so using a high-power computer graphics workstation or even a supercomputer for hours at a time to render individual complicated background images is feasible. Such interesting and complicated background images are then often essentially "pasted" onto 3D surfaces through use of texture mapping during real-time video game play. This technique can be used to provide rich and interesting background and other video game play elements without a corresponding substantial increase in real-time processing overhead.

[0006] While such pre-rendered texture maps have been used with substantial advantageous results in the past, they have some shortcomings in interactive video game play. For example, texture-mapping a pre-rendered image onto a 3D surface during interactive video game play can successfully create impressive visual complexity but may let down the user who wants his or her video game character or other moving object to interact with that complexity. The tremendous advantage 3D video games have over 2D video games is the ability of moving objects to interact in three dimensions with other elements in the scene. Pre-rendered textures, in contrast, are essentially 2D images that are warped or wrapped onto 3D surfaces but still remain two-dimensional. One analogy that is apt for at least some applications is to think of a texture as being like a complex photograph pasted onto a billboard. From a distance, the photograph can look extremely realistic. However, if you walk up and touch the billboard you will immediately find out that the image is only two dimensional and cannot be interacted with in three dimensions.

[0007] We have discovered a unique way to solve this problem in the context of real-time interactive video game play. Just as Alice was able to travel into a 3D world behind her mirror in the story "Alice Through the Looking Glass", we have developed a video game play technique that allows rich pre-rendered images to create 3D worlds with depth.

[0008] In one embodiment, we use a known technique called cube mapping to pre-render images defining a 3D scene. Cube mapping is a form of environment mapping that has been used in the past to provide realistic reflection mapping independent of viewpoint. For example, one common usage of environment mapping is to add realistic reflections to a 3D-rendered scene. Imagine a mirror hanging on the wall. The mirror reflects the scene in the room. As the viewer moves about the room, his or her viewpoint changes so that different objects in the room become visible in the mirror. Cube mapping has been used in the past to provide these and other reflection effects.

[0009] We use cube mapping for a somewhat different purpose--to pre-render a three-dimensional scene or universe such as for example a landscape, the interior of a great cathedral, a castle, or any other desired realistic or fantastic scene. We then add depth to the pre-rendered scene by creating and supplying a depth buffer for each cube-mapped image. The depth buffer defines depths of different objects depicted in the cube map. Using the depth buffer in combination with the cube map allows moving objects to interact with the cube-mapped image in complex, three-dimensional ways. For example, depending upon the effect desired, moving objects can obstruct or be obstructed by some but not other elements depicted in the cube map and/or collide with such elements. The resulting depth information supplied to a panoramically-composited cube map provides a complex interactive visual scene with a degree of 3D realism and interactivity not previously available in conventional strictly 2D texture mapped games.
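
For concreteness, the data this technique works from might be organized roughly as in the C++ sketch below. This is only an illustrative sketch: the structure names are hypothetical, and the face resolution and 16-bit depth format simply follow example numbers given later in the description.

#include <cstdint>
#include <vector>

// Hypothetical layout for one pre-rendered cube face: an off-line rendered
// color image plus a depth map at the same resolution.
struct CubeFace {
    int width = 1024, height = 1024;   // example resolution (see notes below)
    std::vector<uint32_t> color;       // pre-rendered RGB(A) texels
    std::vector<uint16_t> depth;       // pre-rendered depth value per texel
};

// Six faces (+X, -X, +Y, -Y, +Z, -Z), all rendered from one viewpoint at the
// center of the cube, make up the depth-mapped environment map.
struct DepthMappedCubeMap {
    CubeFace face[6];
};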

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] These and other features and advantages will be better and more completely understood by referring to the following detailed description of exemplary non-limiting illustrative embodiments in conjunction with the drawings of which:

[0011] FIGS. 1 and 2 show an exemplary non-limiting video game playing system;

[0012] FIGS. 3A and 3B show exemplary flowcharts of non-limiting illustrative video game processes;

[0013] FIGS. 4A-4C show exemplary illustrative pre-rendering examples;

[0014] FIGS. 5A-5E show exemplary cube-map and real-time character compositing process examples;

[0015] FIGS. 6A and 6B show exemplary illustrative final rendering examples; and

[0016] FIGS. 7A-7G show example interactivity between a 3D object and an exemplary 3D scene defined by an illustrative depth-mapped panoramically-composited cube map.

DETAILED DESCRIPTION

[0017] FIG. 1 shows an example interactive 3D computer graphics system 50. System 50 can be used to play interactive 3D video games with interesting stereo sound. It can also be used for a variety of other applications.

[0018] In this example, system 50 is capable of processing, interactively in real-time, a digital representation or model of a three-dimensional world. System 50 can display some or all of the world from any arbitrary viewpoint. For example, system 50 can interactively change the viewpoint in response to real-time inputs from handheld controllers 52a, 52b or other input devices. This allows the game player to see the world through the eyes of someone within or outside of the world. System 50 can be used for applications that do not require real-time 3D interactive display ( e.g., 2D display generation and/or non-interactive display), but the capability of displaying quality 3D images very quickly can be used to create very realistic and exciting game play or other graphical interactions.

[0019] To play a video game or other application using system 50, the user first connects a main unit 54 to his or her color television set 56 or other display device by connecting a cable 58 between the two. Main unit 54 in this example produces both video signals and audio signals for controlling color television set 56. The video signals are what controls the images displayed on the television screen 59, and the audio signals are played back as sound through television stereo loudspeakers 61L, 61R.

[0020] The user also connects main unit 54 to a power source. This power source may be a conventional AC adapter ( not shown) that plugs into a standard home electrical wall socket and converts the house current into a lower DC voltage signal suitable for powering the main unit 54. Batteries could be used in other implementations.

[0021] The user may use hand controllers 52a, 52b to control main unit 54. Controls 60 can be used, for example, to specify the direction ( up or down, left or right, closer or further away) that a character displayed on television 56 should move within a 3D world. Controls 60 also provide input for other applications ( e.g., menu selection, pointer/cursor control, etc.). Controllers 52 can take a variety of forms. In this example, controllers 52 shown each include controls 60 such as joysticks, push buttons and/or directional switches. Controllers 52 may be connected to main unit 54 by cables or wirelessly via electromagnetic ( e.g., radio or infrared) waves.

[0022] To play an application such as a game, the user selects an appropriate storage medium 62 storing the video game or other application he or she wants to play, and inserts that storage medium into a slot 64 in main unit 54. Storage medium 62 may, for example, be a specially encoded and/or encrypted optical and/or magnetic disk. The user may operate a power switch 66 to turn on main unit 54 and cause the main unit to begin running the video game or other application based on the software stored in the storage medium 62. The user may operate controllers 52 to provide inputs to main unit 54. For example, operating a control 60 may cause the game or other application to start. Moving other controls 60 can cause animated characters to move in different directions or change the user's point of view in a 3D world. Depending upon the particular software stored within the storage medium 62, the various controls 60 on the controller 52 can perform different functions at different times.

[0023] Example Non-Limiting Electronics and Architecture of Overall System

[0024] FIG. 2 shows a block diagram of example components of system 50. The primary components include:

[0025] a main processor ( CPU) 110,

[0026] a main memory 112, and

[0027] a graphics and audio processor 114.

[0028] In this example, main processor 110 ( e.g., an enhanced IBM Power PC 750 or other microprocessor) receives inputs from handheld controllers 108 ( and/or other input devices) via graphics and audio processor 114. Main processor 110 interactively responds to user inputs, and executes a video game or other program supplied, for example, by external storage media 62 via a mass storage access device 106 such as an optical disk drive. As one example, in the context of video game play, main processor 110 can perform collision detection and animation processing in addition to a variety of interactive and control functions.

[0029] In this example, main processor 110 generates 3D graphics and audio commands and sends them to graphics and audio processor 114. The graphics and audio processor 114 processes these commands to generate interesting visual images on display 59 and interesting stereo sound on stereo loudspeakers 61R, 61L or other suitable sound-generating devices.

[0030] Example system 50 includes a video encoder 120 that receives image signals from graphics and audio processor 114 and converts the image signals into analog and/or digital video signals suitable for display on a standard display device such as a computer monitor or home color television set 56. System 50 also includes an audio codec ( compressor/decompressor) 122 that compresses and decompresses digitized audio signals and may also convert between digital and analog audio signaling formats as needed. Audio codec 122 can receive audio inputs via a buffer 124 and provide them to graphics and audio processor 114 for processing ( e.g., mixing with other audio signals the processor generates and/or receives via a streaming audio output of mass storage access device 106). Graphics and audio processor 114 in this example can store audio related information in an audio memory 126 that is available for audio tasks. Graphics and audio processor 114 provides the resulting audio output signals to audio codec 122 for decompression and conversion to analog signals ( e.g., via buffer amplifiers 128L, 128R) so they can be reproduced by loudspeakers 61L, 61R.

[0031] Graphics and audio processor 114 has the ability to communicate with various additional devices that may be present within system 50. For example, a parallel digital bus 130 may be used to communicate with mass storage access device 106 and/or other components. A serial peripheral bus 132 may communicate with a variety of peripheral or other devices including, for example:

[0032] a programmable read-only memory and/or real-time clock 134,

[0033] a modem 136 or other networking interface ( which may in turn connect system 50 to a telecommunications network 138 such as the Internet or other digital network from/to which program instructions and/or data can be downloaded or uploaded), and

[0034] flash memory 140.

[0035] A further external serial bus 142 may be used to communicate with additional expansion memory 144 ( e.g., a memory card) or other devices. Connectors may be used to connect various devices to busses 130, 132, 142.

[0036] Exemplary Non-Limiting Video Game Panoramic Compositing Technique

[0037] FIG. 3A shows an example flowchart of illustrative non-limiting video game play on the system shown in FIGS. 1 and 2. The software used to control and define the operations shown in FIG. 3A may be stored in whole or in part on mass storage device 62 which is loaded into or otherwise coupled to video game system 50 before game play begins. In another example, the software may be downloaded over network 138 and loaded into internal memory of the video game system 50 for execution.

[0038] To play an exemplary illustrative video game, the user may depress a "start" or other control which causes the execution of program instructions to initialize game play ( FIG. 3A, block 302). Once the game has begun executing, the system 50 may acquire user inputs from handheld controllers 52 or other input sources ( FIG. 3A, block 304) and define corresponding viewing angle and frustum parameters corresponding to one or more virtual camera or other views ( FIG. 3A, block 306). The software may also define one or more moving objects such as moving characters and associated animation parameters ( FIG. 3A, block 308). The game may then render a frame onto display 56 ( FIG. 3A, block 310). Assuming the game is not over ( "no" exit to decision block 312), blocks 304-310 are repeated rapidly ( e.g., thirty or sixty times each second) to provide interactive video game play.
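
As a rough illustration of the loop just described, the blocks of FIG. 3A might map to code along the following lines. This is a minimal C++ sketch; every type and function name here is a hypothetical placeholder rather than part of any actual game engine.

// Hypothetical stand-ins for the FIG. 3A steps.
struct Pads   { /* controller state */ };
struct Camera { /* viewing angle and frustum parameters */ };

static Pads   ReadControllers()          { return {}; }    // block 304
static Camera UpdateCamera(const Pads&)  { return {}; }    // block 306
static void   UpdateObjects(const Pads&) {}                // block 308
static void   RenderFrame(const Camera&) {}                // block 310 (FIG. 3B)
static bool   GameOver()                 { return false; } // block 312 ("no" in this sketch)

int main()
{
    // block 302: initialize game play, then repeat blocks 304-310 every frame
    while (!GameOver()) {
        Pads pads  = ReadControllers();
        Camera cam = UpdateCamera(pads);
        UpdateObjects(pads);
        RenderFrame(cam);
    }
}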

[0039] FIG. 3B shows an example "render" routine used by the FIG. 3A video game software to provide panoramic compositing allowing rich, complex 3D virtual scenes and worlds. In the exemplary illustrative embodiment, the first two blocks 320, 322 of FIG. 3B are performed in advance ( "pre-rendered") before real-time rendering, and the remaining blocks 324-332 are performed during real-time rendering. The pre-rendering blocks 320, 322 in one exemplary illustrative embodiment are not performed by video game system 50 at all, but rather are performed well in advance by another computer or graphics workstation that may for example use complicated, time-consuming non-real-time 3D rendering techniques such as ray tracing. In other exemplary embodiments, blocks 320, 322 can be performed as part of the "initialize game play" block 302 shown in FIG. 3A. In still other embodiments, if real-time processing resources are available, blocks 320, 322 can be performed by system 50 at the beginning of a game play sequence or the like.

[0040] To perform the pre-render exemplary process, a virtual cube 400 is defined within a virtual three-dimensional universe. As shown in illustrative FIG. 4A, virtual cube 400 may be defined within any realistic or fantastic scene such as, for example, the interior of a medieval cathedral. The cube 400 is used for cube mapping. A panoramic view is created using a cube map style rendering of the scene from a chosen location as shown in FIG. 4A to provide more camera freedom for pre-rendered games. This technique in one exemplary illustration keeps the viewpoint static but allows the player to look around in any direction.

[0041] In more detail, an exemplary 3D scene 402 is created using any conventional 3D modeling application. The scene is rendered out in six different images as if looking through the six different faces of cube 400 with the viewpoint at the center of the cube ( FIG. 3B, block 320). This produces a high-quality off-line rendered RGB or other color cube map 404 representation of the scene as shown in FIG. 4B. In the exemplary illustrative embodiment, a depth map 406 of the same scene is also created based on the same six cube image faces ( see FIG. 4C and block 322 of FIG. 3B).
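
A sketch of the six camera orientations used when pre-rendering the faces (block 320) is shown below. Axis and up-vector conventions differ between modeling tools, so the particular vectors here are just one plausible choice, not a convention taken from the description.

#include <array>

struct Vec3 { float x, y, z; };

// One camera per cube face, each rendered with a 90-degree field of view and
// a square aspect ratio from the viewpoint at the cube's center.
struct FaceCamera { Vec3 forward; Vec3 up; };

static const std::array<FaceCamera, 6> kFaceCameras = {{
    { {  1, 0, 0 }, { 0, 1, 0 } },   // +X
    { { -1, 0, 0 }, { 0, 1, 0 } },   // -X
    { {  0, 1, 0 }, { 0, 0,-1 } },   // +Y (looking up)
    { {  0,-1, 0 }, { 0, 0, 1 } },   // -Y (looking down)
    { {  0, 0, 1 }, { 0, 1, 0 } },   // +Z
    { {  0, 0,-1 }, { 0, 1, 0 } },   // -Z
}};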

[0042] During real-time video game play ( FIG. 3B, block 324), video game system 50 loads the pre-rendered cube map face into its embedded frame buffer memory and loads the pre-rendered depth map corresponding to that face into a memory such as a Z ( depth) buffer ( FIG. 3B, blocks 324, 326). In one exemplary embodiment, the cube map 404 and depth map 406 such as shown in FIGS. 4B, 4C may be stored on mass storage device 62 or may be downloaded over network 138.

[0043] Once these data structures are in appropriate memory of video game system 50, the video game software renders one or more real-time objects such as animated characters into the frame buffer using the same viewpoint and frustum parameters in one exemplary embodiment to provide a composite image ( FIG. 3B, block 328). Such rendering may make use of the depth information 406 ( e.g., through use of a conventional hardware or software-based Z-compare operation and/or collision detection) to provide hidden surface removal and other effects.
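
Conceptually, the Z-compare used during compositing (block 328) behaves like the software sketch below: a fragment of the real-time object only replaces a pixel when it is closer than the pre-rendered depth stored for that pixel. On real hardware the comparison is done by the Z-buffer unit rather than in a loop; the buffer layout and names here are assumptions.

#include <cstdint>
#include <vector>

// Frame buffer pre-loaded with one cube face and its depth map (blocks 324, 326).
struct FaceBuffers {
    int width = 0, height = 0;
    std::vector<uint32_t> color;   // initialized from the pre-rendered cube face
    std::vector<uint16_t> depth;   // initialized from the pre-rendered depth map
};

// Write one candidate fragment of the real-time object (block 328).
inline void CompositeFragment(FaceBuffers& fb, int x, int y,
                              uint16_t fragDepth, uint32_t fragColor)
{
    const size_t i = static_cast<size_t>(y) * fb.width + x;
    if (fragDepth < fb.depth[i]) {   // closer than the pre-rendered scene here?
        fb.depth[i] = fragDepth;     // yes: the object obstructs the background
        fb.color[i] = fragColor;
    }
    // otherwise the pre-rendered pixel obstructs the object and is left alone
}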

[0044] This same process is repeated in the exemplary embodiment for each of the other cube-mapped faces to produce a post-composited cube map ( FIG. 3B, block 330). As will be appreciated by those of ordinary skill in the art, in at least some applications it is possible to reduce the number of cube mapped faces to composite by performing a rough "bounding box" or other type of test determining which cube mapped faces ( e.g., one, two or three at a maximum) the moving object image is to be displayed within, thereby avoiding the need to composite the unaffected faces; a rough example of such a test is sketched after the list below. There are cases where a character could span more than three faces of the cube. ( And if the final frustum is wide enough we could see more than three faces at once also.) It is the intersection between three things which is rendered in real-time and composited with the cube map in the exemplary embodiment:

[0045] the final frustum;

[0046] the moving/animating character ( which may be approximated using bounding boxes, a convex hull, or another method);

[0047] the cube map faces, or predefined portions thereof ( in one example implementation we split each cube face into four pieces, thus reducing the amount of area that needed to be composited).
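
One very rough way to decide whether a character can touch a given face is sketched below: its bounding sphere is tested against the four planes of that face's 90-degree frustum, taken from the cube-map viewpoint. This is a conservative illustration under assumed coordinate conventions, not the specific test used in the description above.

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Inward-facing plane normals of the +X face's 90-degree frustum; all four
// planes pass through the cube-map viewpoint. The other five faces use the
// same planes with axes permuted or negated.
static const Vec3 kPlusXPlanes[4] = {
    { 0.7071f,  0.7071f, 0.0f }, { 0.7071f, -0.7071f, 0.0f },
    { 0.7071f,  0.0f,  0.7071f }, { 0.7071f,  0.0f, -0.7071f },
};

// center and radius are expressed relative to the cube-map viewpoint.
bool SphereMayTouchPlusXFace(Vec3 center, float radius)
{
    for (const Vec3& n : kPlusXPlanes)
        if (Dot(n, center) < -radius)   // entirely behind one plane: outside
            return false;
    return true;                        // possibly visible in this face
}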

[0048] Once a complete post-composited cube map has been created, it is then applied to a cube or other mesh and rendered with the current projection matrix to create a panoramic image with desired viewing angle and frustum parameters ( FIG. 3B, block 332). See FIGS. 6A and 6B. In one exemplary illustrative non-limiting embodiment, this final rendering step simply applies the multiple cube map faces to the inside of a cube mesh and proceeds by panoramically rendering in a conventional fashion with a normal projection matrix using conventional hardware and/or software rendering techniques and engines. The result should be indistinguishable from rendering the original scene in real-time in the first place except that the background is at pre-rendered quality.
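
In hardware this final pass simply textures an inward-facing cube mesh and lets the normal projection matrix do the rest. The software sketch below shows the equivalent per-pixel idea: find which composited face a view ray exits through and where it lands on that face. The face ordering and (u, v) orientation are arbitrary assumptions that would have to match however the faces were authored.

#include <cmath>

struct Vec3 { float x, y, z; };

// Returns the face index (0..5 for +X,-X,+Y,-Y,+Z,-Z) a view ray exits
// through, plus the normalized (u, v) coordinates within that face.
int CubeLookup(Vec3 dir, float& u, float& v)
{
    const float ax = std::fabs(dir.x), ay = std::fabs(dir.y), az = std::fabs(dir.z);
    if (ax >= ay && ax >= az) {                       // an X face is dominant
        u = 0.5f * (1.0f + (dir.x > 0 ? -dir.z : dir.z) / ax);
        v = 0.5f * (1.0f - dir.y / ax);
        return dir.x > 0 ? 0 : 1;
    }
    if (ay >= ax && ay >= az) {                       // a Y face is dominant
        u = 0.5f * (1.0f + dir.x / ay);
        v = 0.5f * (1.0f + (dir.y > 0 ? dir.z : -dir.z) / ay);
        return dir.y > 0 ? 2 : 3;
    }
    u = 0.5f * (1.0f + (dir.z > 0 ? dir.x : -dir.x) / az);   // a Z face
    v = 0.5f * (1.0f - dir.y / az);
    return dir.z > 0 ? 4 : 5;
}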

[0049] Panoramic Compositing--Example Implementation Details

[0050] Example Art Path for Panoramic Compositing:

[0051] 1. Pre-render the RGB and Depth buffers 404, 406 for each cube face from a CG tool such as Maya or other standard tool.

[0052] 2. Possible to use a resolution of, for example, 1024×1024 for each field-of-view ( "FOV") 90-degree cube face. If we consider the case where the viewing direction is towards the center of a face, we get a 1:1 texel-to-pixel ratio with an NTSC 640×480 screen with a horizontal FOV of 38.7° and a vertical FOV of 28°. When we face the edges or corners of the cube, the texel-to-pixel ratio increases ( a maximum of 1.73 times more at the corners).

[0053] 3. In an example embodiment, due to frame-buffer size limitations or for efficiency reasons, it may be useful to perform the compositing in pieces less than the full size of a cube face, and copy these intermediate results to texture maps for final rendering in a separate pass. For example, each face of the original cube could be split into four pieces of 512×512 each. Other techniques could be used in other embodiments.

[0054] 4. Regarding the format of the Z-textures: One example implementation uses 16-bit Z textures. The value which must be stored in the Z texture is not actually Z, but can be: far(Z - near) / (Z(far - near))
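
For illustration, converting an eye-space depth Z into the 16-bit value stored in such a Z texture might look like the sketch below; the clamping and rounding choices are assumptions.

#include <cstdint>

// Encode eye-space depth Z using the expression above:
// far * (Z - near) / (Z * (far - near)), which is 0.0 at the near plane and
// 1.0 at the far plane for a standard perspective projection.
uint16_t EncodeZTexture(float Z, float nearPlane, float farPlane)
{
    float d = farPlane * (Z - nearPlane) / (Z * (farPlane - nearPlane));
    if (d < 0.0f) d = 0.0f;          // clamp in front of the near plane
    if (d > 1.0f) d = 1.0f;          // clamp beyond the far plane
    return static_cast<uint16_t>(d * 65535.0f + 0.5f);
}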

[0055] 5. Another detail is worth mentioning regarding pre-rendered Z ( depth) buffers 406. With pre-rendered RGB buffers 404, we have the luxury of doing a fully super-sampled rendering to give smooth edges, and correct filtering. However, with Z we may have less resolution ( e.g., only one sample per pixel in some cases). When we try to composite our real-time graphics with the pre-rendered graphics, this can cause artifacts. The worst of these artifacts is when the Z value says a pixel is foreground, but the RGB average is closer to the background color. Then when a character is between the foreground and the background, we may get a halo of background color around foreground objects. In order to minimize this effect, we have found it is better to bias towards using the background Z value as the representative Z value for pixels whose RGB is a mixture of foreground and background colors. To achieve this kind of control over the source textures, one can render at a higher resolution initially and pre-process the data down to the final size.
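
A minimal sketch of that pre-processing step is shown below, assuming the depth map was rendered at twice the final resolution and that larger stored values are farther away: each 2x2 block keeps its most-background sample rather than an average.

#include <algorithm>
#include <cstdint>
#include <vector>

// Reduce a double-resolution depth image to final size, biasing each output
// pixel toward the background (largest) depth of its 2x2 source block.
std::vector<uint16_t> DownsampleDepthBackgroundBiased(
    const std::vector<uint16_t>& hi, int hiWidth, int hiHeight)
{
    const int w = hiWidth / 2, h = hiHeight / 2;
    std::vector<uint16_t> lo(static_cast<size_t>(w) * h);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const size_t i = static_cast<size_t>(2 * y) * hiWidth + 2 * x;
            lo[static_cast<size_t>(y) * w + x] =
                std::max({ hi[i], hi[i + 1], hi[i + hiWidth], hi[i + hiWidth + 1] });
        }
    }
    return lo;
}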

[0056] This technique may be useful for another reason as well. For example, some conventional offline renderers may use jittered sampling, and thus the outlines of straight edged objects can sometimes become ragged unless rendered at a higher resolution initially and further processed to get a cleaner Z image. Other example embodiments may not be concerned with this issue.

[0057] 6. Optimization. There is no need to composite all panels or cube face images in the pre-pass. Only those which overlap both the final frustum and the real-time characters need to be composited and copied out.

[0058] Example Images

[0059] FIGS. 7A-7G show example images that are possible in the cathedral interior scene of FIG. 4A when an exemplary moving object 500 interacts with the cube-mapped scene. The moving object 500 shown in FIGS. 7A-7G is a cube for purposes of illustration only--any animated or non-animated object of any configuration could be used instead. FIG. 7A shows moving object 500 obstructing and being obstructed by different portions of the cube-mapped virtual 3D environment--in this case a railing 502 on the clerestory of the virtual cathedral. FIG. 7B shows the same moving object 500 obstructing and being obstructed by a column 504 and also descending and thus being obstructed by portions of the tile floor surface 506 of the virtual cathedral. FIG. 7C shows the same cubic moving object 500 obstructing and being obstructed by ceiling details such as shown in FIG. 5D. FIG. 7D shows the moving object 500 obstructing and being obstructed by an archway detail 508. FIG. 7E shows moving object 500 obstructing and being obstructed by different portions of a pew 510. FIG. 7F shows moving object 500 obstructing and being obstructed by different portions of a column 512 adjacent the cathedral nave. FIG. 7G shows a magnified detail of the FIG. 7F image.

[0060] In the exemplary illustrative embodiment, images such as those shown in FIGS. 7A-7G can be created by giving the video game user control over moving object 500 so it can be moved anywhere within the three-dimensional scene defined by the depth-mapped panoramic cube map environment. While the exemplary moving object 500 shown in FIGS. 7A and 7G has the characteristic of being able to pass through virtual solid structure in order to better illustrate hidden surface removal, it is also possible to provide other characteristics such as for example collision detection so the moving object can bounce off or otherwise interact with the depth of the panoramically-rendered scene.

[0061] Example Enhancements to Panoramic Compositing

[0062] Example Enhanced Antialiasing:

[0063] The quality of composited renderings could be improved with better antialiasing. This can be achieved by allowing multiple Z values and multiple color values for edge pixels of foreground objects.

[0064] We would get a lot of benefit even with just two Z values per pixel. This allows a high quality solution to the halo artifacts which occur when real-time CG characters are positioned between background and foreground pre-rendered elements in 1-depth-sample per pixel compositing.

[0065] The following algorithm can be used to render anti-aliased edges in the two depth samples per pixel case.

[0066] 1. Render the furthest depth value and RGB value first.

[0067] 2. Composite the real-time character as usual.

[0068] 3. Alpha-blend on the foreground edges--alpha comes from foreground coverage value.

[0069] Note that foreground edges occupy only a small percentage of the pixels, so the antialiasing pass does not need to consume a full frame's worth of fill-rate bandwidth.
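
The per-pixel logic of steps 1-3 might look roughly like the sketch below. The layered-pixel layout, the field names and the packed-RGBA blend helper are all assumptions made for illustration, not a description of any particular hardware path.

#include <cstdint>

// Each pre-rendered pixel carries a background sample plus, on foreground
// edges, a foreground sample and a coverage value (0 where there is no edge).
struct PreRenderedPixel {
    uint32_t backRGB;  uint16_t backZ;    // step 1: furthest sample
    uint32_t frontRGB; uint16_t frontZ;   // foreground edge sample
    float coverage;
};

struct OutPixel { uint32_t rgb; uint16_t z; };

// Blend each 8-bit channel of two packed RGBA8 colors by factor t.
static uint32_t Blend(uint32_t a, uint32_t b, float t)
{
    uint32_t out = 0;
    for (int shift = 0; shift < 32; shift += 8) {
        const float ca = float((a >> shift) & 0xFF);
        const float cb = float((b >> shift) & 0xFF);
        out |= uint32_t(ca + (cb - ca) * t + 0.5f) << shift;
    }
    return out;
}

OutPixel CompositePixel(const PreRenderedPixel& p,
                        bool charCovers, uint32_t charRGB, uint16_t charZ)
{
    OutPixel out = { p.backRGB, p.backZ };                 // step 1
    if (charCovers && charZ < out.z) {                     // step 2: usual Z test
        out.rgb = charRGB;
        out.z = charZ;
    }
    if (p.coverage > 0.0f && p.frontZ < out.z)             // step 3: edge blend
        out.rgb = Blend(out.rgb, p.frontRGB, p.coverage);
    return out;
}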

[0070] Example Panoramic Movie Compositing

[0071] It may also be desirable to use movie textures to further enhance the realism, and promote the illusion that the full scene is being rendered in real-time. In many cases, it would be desirable to be able to specify particular regions of the scene to be animated, to help minimize the storage, bandwidth and decompression costs.

[0072] If the animations are restricted to lighting only, RGB animation is sufficient. If we want animated foreground objects then animated Z-textures can be used.

[0073] While the technology herein has been described in connection with exemplary illustrative non-limiting embodiments, the invention is not to be limited by the disclosure. For example, the pre-rendered environment could be a shape other than a cube. In some cases it would not be necessary to cover the whole panorama. For example if the game did not require the camera to point at the floor or ceiling, a tessellated cylinder could be an efficient shape to use for the pre-rendered environment. Example embodiments can work, for example, with environments composed of multiple planar projected images to achieve a wide degree of camera directional freedom using single planar images. As another example, the technique does not rely on the source material being pre-rendered; the environment could conceivably originate from real world photographs, and associated depth-captured images for example. The invention is intended to be defined by the claims and to cover all corresponding and equivalent arrangements whether or not specifically disclosed herein.
 
Re: Nintendo revolution

From Engadget.com:

There’s a bit of interesting info buried inside the press release for the Nintendo Revolution, that seems to imply they might be giving DIY developers some freedom to run homebrew code: “Freedom of design: A dynamic development architecture equally accommodates both big-budget, high-profile game ‘masterpieces’ as well as indie games conceived by individual developers equipped with only a big idea.” They make it a small point, and Nintendo sure hasn’t been talking up this aspect — but if there’s any truth to it, this could be a fairly big move in the gaming console world, to start with an intentionally open platform right off the bat (as opposed to waiting for all the brilliant hackers to inevitably tear your console to shreds, anyway…).

This looks interesting though.... Complete programming freedom.

I remember back in the days, producers used to complain about the difficulty of creating games on the PS2 architecture. Perhaps this new freedom will give programmers the ease to produce better games?

Efficient code = Hi-Spec hardware unnecessary...
 
Re: Nintendo revolution

Revolution ?

[image attachment]
 
Re: Nintendo revolution

why is the system black, shouldn't it be purple or blue??
 
Re: Nintendo revolution

I'm confused. I thought the Revolution was that little black box.
 
Re: Nintendo revolution

RuneEdge said:
I'm confused. I thought the Revolution was that little black box.

That's a prototype!

It's gonna be SMALLER!
 
Re: Nintendo revolution

so for christ's sake, could anybody please clear up wtf this is?

a virtual helmet system, a stupid fan fake, a new console, or just you guys being bored posting info about a bullshit new Nintendo system, or WHAT? i gotta admit i haven't heard anything about "revolution" in the last few years. sorry! ;) :lol:
 
Re: Nintendo revolution

I'm not a Nintendo fan, I'm more Sony and Winning Eleven :D
I have a feeling about the virtual helmet. And why is this console called Revolution? for a controller? :lol:

PS: strange that IGN (who work with Nintendo and saw the hidden Nintendo conference) have added news about Dolby Headphones (old technology, nothing new, believe me, I'm a home cinema fan), and the Dolby Headphones video looks like a """fake"""...
http://psp.ign.com/articles/617/617679p1.html
 
Re: Nintendo revolution

Thomas, you'll post anything here, won't you? :lol:
Just kidding with ya. Good work.
 
Re: Nintendo revolution

lumlum said:
Thomas, you'll post anything here, won't you? :lol:
Just kidding with ya. Good work.

Lumlum, you want the real revolution? :D
BELIEVE ME! ALL THE PEOPLE ON THIS FORUM CAN'T WAIT FOR THE NINTENDO REVOLUTION AFTER SEEING MY VIDEO! :lmao:

PS: it's not a joke

maybe I'll post it tomorrow.
 
Re: Nintendo revolution

Looking forward 2 the new controller and WE on ALL consoles!

Sony are losing all exclusives!

Will get all 3 if they are reasonable
 
Re: Nintendo revolution

Oh. My. God. Is that seriously it? Jesus wept.

Welcome to the death of Nintendo.
 
Re: Nintendo revolution

That has to be the worst thing I've ever seen.

I mean the ideas are great and everything, but the design of that thing is awful. I think it pretty much rules out playing WE/PES on the Revolution anyway. :)
 