I'll take this necro even further back and ask why 7th generation games didn't look that much better than 6th generation games.
Looking at a game like Super Mario Galaxy and realizing that it's essentially running on 2001 hardware, it's clear that a little bit of extra polish in development goes a long way (texturing the shadow detail of his fingers rather than rendering them with real-time lighting, for example). A lot of the leap was from a shift in game design (larger teams, bigger budgets, etc.). Nintendo openly spoke of graphics reaching the point where any game could be made on any system with varying levels of graphical fidelity and, thus, performance was no longer relevant to whether a particular game could be made for a particular system. Yes, you could water down Modern Warfare to play on a GameCube. You couldn't, however, make a touch-screen or Wii Remote pointer game on a 360 or PS3 as they existed at launch without huge compromises, which is why they chose the path they chose. Obviously, they chose right, because the Wii was easily more successful than anything they'd ever done and blew away the technically superior competition, which ultimately tried to imitate their strategy (Move, Kinect, etc.).
Heck, most players didn't even realize that those Burger King games from 2005 were actually original Xbox games in Xbox 360-style packaging. Many more who noticed the graphical difference chalked it up to them being almost-free advergames rather than games from the previous generation.
True, a lot of this may have to do with the fact that newer hardware tends to start slow out of the gate, and doesn't really start pushing limits until about mid life-cycle. It seems to have always been this way. I had no idea that the NES was capable of pushing games like TMNT 2 until well into its life cycle, I didn't realize the SNES was capable of Donkey Kong Country until the SNES had been around for a while, etc. It's always easy in the early going to say "this is barely better than the last generation", and it gets easier with each generation. You look at Uncharted 3 (PS3), compare it to a PS4 launch title like Knack or something, and you might say, "uhhh, this actually looks like a step backwards". Then 2 or 3 years later Uncharted 4 comes out and you go, "Oh, now this is much more like it". You just usually don't get that moment on the first day, and sometimes not even in the first year.
I was under the impression that bringing back older threads was okay as long as the post was relevant (I mean, we are about to have our first 9th (or at least 8 1/2 gen) console, so I thought it'd be a good idea to post an update on my thoughts on it) and not just a generic "bump" or "ttt" sort of deal.
It is totally okay, especially when the post is relevant. I was just saying "holy cow, this is an old thread!", not "Shame on you for bumping this!". I probably would have lol'ed if you just dropped a random "ttt" to a 3 year-old thread.
Lol you call him out for necroposting and yet you quoted someone, who also happens to be banned for a year now
"Call him out?" I realized after I posted that all the old replies except the last one were from years ago. I fail to see the "yet" here, which implies that I've somehow hypocritically done something related to necro-bumping a thread that was on the first page.
Dude, relax, I was just playing with you. It was funny because you were ragging on Estil for necroposting, but then you did something equally forum-funny (to me at least): quoting a member who has a giant BANNED avatar like they're gonna see what you wrote.
Regardless of whether you knew the thread was a necro or not, quoting a long-banned member to argue back at him is funny.
True, a lot of this may have to do with the fact that newer hardware tends to start slow out of the gate, and doesn't really start pushing limits until about mid life-cycle. It seems to have always been this way. I had no idea that the NES was capable of pushing games like TMNT 2 until well into its life cycle, I didn't realize the SNES was capable of Donkey Kong Country until the SNES had been around for a while, etc. It's always easy in the early-going to say "this is barely better than the last generation", and it gets easier with each generation. You look at Uncharted 3 (PS3), and compare it to a PS4 launch title like Knack or something, and you might say, "uhhh, this actually looks like a step backwards". Then 2 or 3 years later Uncharted 4 comes out and you go "Oh, now this is much more like it". You just usually don't get that moment on the first day, and sometimes not even in the first year.
I think it's a combination of that and what extra development effort brings (beyond simple familiarity with the platform). When Max Payne was released, people praised its multimedia presentation, with static comic-book panels and voice-overs, but even during the PS2's life the standard increased to the point where people expected a team of animators and voice actors producing fully animated cutscenes for an A-list title. Now they're expected to be fully animated by a team of people who might as well be working on CG movies!
Compare Super Mario Sunshine and Super Mario Galaxy on essentially the same hardware: we get a fully orchestrated soundtrack and cinematics of a completely different caliber in Galaxy, where we got a mostly real-time-generated soundtrack and some noisy text boxes in Sunshine. The difference was neither hardware capabilities nor familiarity with the hardware: it was scope of development and development budget. You couldn't always justify an orchestral score for a game in 2001, but it was somewhat expected for A-list titles after 2007. Heck, it seems like Hans Zimmer scores as many games as movies these days.
You might say that the disc capacity limitation is what enabled Galaxy's orchestral soundtrack, but that's not the only example: take a look at The Legend of Zelda: The Wind Waker and The Legend of Zelda: Twilight Princess on the same platform (GameCube). They even share the same game engine! The higher expectations of newer A-list game development are why TP got an orchestral soundtrack and WW didn't. This was obvious when reviews still panned the game as antiquated for its silent characters and text-only interaction (few would have made that complaint about Wind Waker in 2002). Despite the expanded scope of its development, it still did not live up to the expected scope.
While more polish and more performance on the same hardware does mean more time/effort/skill, it has become the norm for development scope to increase regardless of the platform's performance. The performance boost that comes from increased familiarity with a particular platform is likely secondary to this. How much of Super Mario World 2: Yoshi's Island was due to the increased performance of the Super FX 2 chip and developmental improvements, and how much was simply due to the extended development cycle allowing for more polish? That game clearly benefited from all of the above!
Very true, and those among us who have any sort of appreciation for the process of developing a game can recognize that. It just really blows me away how much potential of a system can be unlocked over the course of time as developers become familiar with the hardware they're developing for. Growing up I thought many times, "holy shit, I didn't realize my NES/SNES/N64/PS1 etc was capable of pushing games like this!". Nowadays I have to remind myself of that whenever a new piece of hardware comes out and doesn't knock my socks off immediately with launch titles. I realize it takes time to get a rhythm going with a system, and now I feel more comfortable dropping cash on a new system.
To another point in this thread, I do feel like we're getting into an age where silicon and chips aren't much of a factor anymore, and the limits of how insane a game can get will really only be governed by how much time and work a developer is willing to put into their game.
I realized the plateau of diminishing returns on power when Super Smash Bros. Brawl launched in 2008 and looked worse than Melee 7 years earlier. I looked around and realized console games were getting worse as a whole. I am exclusively "retro" now.
I'll say, Brawl doesn't even let you zoom in on the trophies! It was neat to do that in Melee, and it was great for showing off how well the GC could do graphics! Did I forget to mention Melee came out at launch or soon after?
I think game graphics will be stalled in the uncanny valley for quite a while to come. I just watched the latest Kojima trailer, and as impressive as that was, it still looked like dead-eyed mannequins
At first I would have agreed with the statement, but lately I have been going back to the Xbox 360 and PS3, and the graphics are now so glaringly different to me. Not only the graphics: the scenery seems fuller, with more models/NPCs that would have been left out on the previous consoles.
The uncanny valley is something Nintendo understands very well, but third-party developers don't. Nintendo games have bright, colorful, flamboyant worlds that couldn't exist in real life. It's not the drab greens, browns, and grays of modern FPS-style titles. Look at Mario. He is twice as tall as he is wide, basically proportioned like a toddler. Throw in a bulbous nose and bushy mustache, and a jump that equals four times his height. His cartoonlike qualities trigger the brain's inner child, releasing endorphins, and you get similar effects with DreamWorks and Pixar movies.
Instead, AAA third-party developers often try to go for lifelike realism, and it breaks down from there. Mannequins with chunky movements. There's a reason why voice actors often act out their characters' scenes wearing motion capture suits, even for CGI movies: to get the animation right. Sometimes in video games, your character even keeps running full speed when you hit an invisible wall. That's just lazy 6th-gen, 2001-style development. Run into a wall in any 3D Mario game, from 64 to 3D World, and he doesn't do that.
All the processing power in the world won't bridge the gap across the uncanny valley without serious development effort. But big-budget games often cost more to produce than movies, and with razor-thin profit margins, one big flop can bankrupt a studio. Developers have to learn the art of doing more with less. Design limitations stimulate creativity, which is severely lacking in many modern games. Mobile games, for instance, as well as some indie titles (some indies are great, but many are abysmal), just pale in comparison to back-in-the-day games where devs had to work around practical restrictions. No surprise that some of the best-designed indie games from a gameplay perspective capitalize on this fact by building engines off of chunky-looking pixels despite playing on HD-capable screens.
^^^ The way through the uncanny valley issue is, at least in part, to do what big-budget movies do for lifelike CGI faces, where they superimpose real faces and eyes (e.g. Gollum, King Kong, probably The Jungle Book, etc.).
Good motion capture technology has been around for a long time, as well.
BUT, that stuff all comes at a hefty price.
Studio movies can afford to spend $200MM in production, since they'll gross $1B+ worldwide with a blockbuster.
Even hugely successful games usually earn a fraction of that (and with a less favorable return for the studio, in terms of the portion of gross that makes it all the way back).
That might as well have been a movie, for as much promo as they did for it.
That also might be what started this topic; when they released the trailer for GTA V on PS3, it looked graphically better than anything that was out for PS4 at the time.
I think it depends on the person's eyes, honestly. If you have bad eyesight, it's probably going to affect how you see games (I mean, other than literally). Like for me, I'm nearsighted, and both of my parents have pretty poor eyesight, so bad eyes run in the family (a lot of relatives wear glasses and have since a young age). I can't tell the difference between Blu-ray and DVD. It all looks the same to me. Maybe the BR is slightly less fuzzy, but... that doesn't justify the price for me, so I still buy DVDs.
Going back to games, Fallout 4 has gotten slammed for its graphics. People say they're not that great... it has to be one of the best-looking games I've seen in years. It's beautiful. But all games (save for stuff like Fallout and Dishonored II) right now are looking the same to me. I do think we've reached the point where we can't go any farther with graphical quality.
The next step would be 3D and VR graphics. Seeing stuff with the naked eye can't get any better... we need more advanced items or more advanced eyes. Maybe that's the next big step? Robotic eyes?
I wish everyone would stop trying to push graphics limits and keep a damn nice framerate. I see badass-looking games like MGSV running at 60fps, then I look at the latest Zelda trailer (which looks like a nice Wii game) struggling to keep up at 30fps. No matter how technically out-of-date F-Zero GX or the Xbox Ninja Gaiden becomes, that silky-smooth framerate makes them a pleasure to look at.
I think graphics have pretty much hit their peak. There's not much room for improvement anymore. I think developers need to focus on making games run at 60FPS again, like they did during the 6th gen.
The primary reason why Gollum is so believable is that his actor wore a motion capture suit and the artists literally captured every movement, from broad, sweeping motions to the slightest twitch of facial muscles. It was the first time an actor won an award for a CGI character, and totally deserved. If you watch the bonus DVD features from the LOTR boxed set, they even show raw footage of him flopping around in a frigid creek.
But I have yet to see any evidence of CGI motion capture techniques employed in video games. I think it would be beneficial for some independent outfit to do motion capture of actors performing a bunch of generic acts, such as standing, walking, running, driving, opening doors, eating, shooting guns, shooting bows, swinging weapons, lifting, climbing, etc., and build motion capture libraries for development studios to license.
Most of the time when a character stands, for instance, he/she will periodically shift their weight from one hip to the other, causing one knee to be slightly bent and one hip slightly lower than the other. I took some art classes in college, and one thing we learned was that poses are never truly symmetric unless you are a soldier standing at attention.
Then there's the effect of gravity on parts of the body; for instance, a woman's chest will flatten in a reclined pose as opposed to standing. And I'm not even going to go into the exaggerated proportions and "jiggle physics" some animators employ. Hair is extremely dynamic, but most CGI video game characters keep it short or mostly conformed to the body. Particle engines can simulate hair movement in CGI movies, but you have all sorts of collision effects and intersections with the body that have to be hand-manipulated for each frame to be believable. In a game, such tweaks would be impossible, so it's much easier to just cut it short and ignore gravity.
Also, hair is one of the most expensive aspects of rendering a character, because you've got thousands of points that kind of move together but each bend to their own will. Simulating a single frame of gorgeous wavy hair may take minutes to render at 1920x1080 even on the fastest workstations, hardly ideal for a 1/60-second frame budget. As a result, game hair doesn't use particle effects at all but rather tends to consist of sculpted poly shapes with textures. They run these sculpted forms through the same "jiggle engine" they use for cleavage and accessories, so that ponytails and things tend to float as the character bobs up and down, making the physics only partly believable.
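For anyone curious, that "jiggle" treatment is usually nothing fancier than a damped spring per hair bone: each frame, the sculpted ponytail piece gets pulled back toward its rest position under spring force and drag, so it lags and overshoots as the character bobs. Here's a toy one-dimensional sketch (the function name and constants are my own for illustration, not from any real engine):

```python
# Toy damped-spring "jiggle bone": the sculpted hair piece chases its
# rest position with springy lag instead of per-strand particle physics.

def step_jiggle(pos, vel, rest, stiffness=40.0, damping=6.0, dt=1.0 / 60.0):
    """Advance one 60fps frame of a 1-D jiggle bone (semi-implicit Euler)."""
    accel = stiffness * (rest - pos) - damping * vel  # spring pull + drag
    vel += accel * dt
    pos += vel * dt
    return pos, vel

# The character's head bobs up one unit; the ponytail lags, overshoots,
# then settles at the new rest position over the next few seconds.
pos, vel = 0.0, 0.0
rest = 1.0
for _ in range(600):  # 10 seconds at 60 fps
    pos, vel = step_jiggle(pos, vel, rest)
print(round(pos, 3))  # settled at the rest position
```

Because the damping is below critical, the bone visibly overshoots before settling, which is exactly that floaty, only-partly-believable look. Real engines do this in 3-D per bone with collision limits, but the core idea is this cheap.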
The uncanny valley is a long trough to cross, and again, most pure CGI movies (those without live-action elements) use stylized characters that remain on the high mound before the ravine. You can't give the Gollum treatment to every single character, environment, and prop in an animated film (much less a video game that renders on the fly). It would likely cost billions and take many years of work to produce such a film, which would be extremely lucky to make $200 million at the box office.
Even in live-action flicks, they blend as many practical effects as they can with CGI and postwork to make it believable. You simply cannot do that in video games. Animated violence or sexualization is not without precedent, so there's no reason why main characters in a Teen or MA game can't be modeled with cartoonlike stylization like typical Pixar/DreamWorks movies, but in a "mature" fashion. Many fantasy RPGs in fact do this, avoiding the uncanny valley entirely. But more and more frequently, developers ignore the "bridge out" sign and dive headfirst into this valley, and gamers must just accept it.
That said, it will be neat to see where Zelda BOTW goes with regard to artistic direction. I expect nothing less than being perched upon that high mound immediately before the ravine.
Really? I'm certain I have seen promo videos of studios using those exact techniques for character motion (though I can't recall the specific games involved).
I don't think I've seen that same attention given to face capture in video games, but I wouldn't be surprised if somebody was dabbling in it.
They may do motion capture for the cinematics, but not the actual gameplay.
Well, that answers my other question regarding whether there is still a big difference between what state-of-the-art equipment that only studios, TV stations, and such can afford can do for computer graphics versus a home console. Because I was quite impressed by that difference back in the 80s/90s!
Motion capture is used to record gameplay movement. I remember watching something about Andy Serkis donning the motion capture suit to record ledge jumping/hanging animations for the video game Enslaved.
He also did it for Heavenly Sword. His involvement was heavily marketed long before the game was finished. Not his fault, but it ended up just being an inferior God of War clone that relied WAY too much on cinematic elements to impress you into thinking the game was good.
Seeing what Pixar has been capable of with no photographic textures or motion capture AT ALL (they see it as lazy and inartistic), I get more excited by talented animation that doesn't use these shortcuts.
I remember watching stuff about the mocap used in Wayne Gretzky's 3D Hockey before the N64 was even released, so it goes back a lot farther.
Fun fact, games of today look better than Toy Story...
Comments
I'll take this necro even further back and ask why 7th generation games didn't look that much better than 6th generation games.
Looking at a game like Super Mario Galaxy and realizing that it's essentially running on 2001 hardware, it's clear that a little bit of extra polish in developing goes a long way (texturing the shadow detail of his fingers rather than rendering them with real-Tim lighting, for example). A lot of the leap was from a shift in game design (larger teams, bigger budgets, etc). Nintendo openly spoke of graphics reaching the point where any game could be made on any system with varying levels of graphical fidelity and, thus, performance was no longer relevant to whether a particular game could be made for a particular system. Yes, you could water-down Modern Warfare to play on a GameCube. You couldn't, however, make a touch screen or Wii Remote pointer game on a 360 or PS3 as they existed at launch without huge compromises, which is why they chose the path they chose. Obviously, they chose right because the Wii was easily more successfully than anything they've ever done and blew away the superior competition, which ultimately tried to imitate their strategy (Move, Kinect, etc).
Heck, most players didn't even realize the those Burger King games from 2005 were actually original XBOX games in XBOX 360-style packaging. Many more who noticed the graphical difference chalked it up to them being almost-free advergames rather than games from the previous generation.
True, a lot of this may have to do with the fact that newer hardware tends to start slow out of the gate, and doesn't really start pushing limits until about mid life-cycle. It seems to have always been this way. I had no idea that the NES was capable of pushing games like TMNT 2 until well into its life cycle, I didn't realize the SNES was capable of Donkey Kong Country until the SNES had been around for a while, etc. It's always easy in the early-going to say "this is barely better than the last generation", and it gets easier with each generation. You look at Uncharted 3 (PS3), and compare it to a PS4 launch title like Knack or something, and you might say, "uhhh, this actually looks like a step backwards". Then 2 or 3 years later Uncharted 4 comes out and you go "Oh, now this is much more like it". You just usually don't get that moment on the first day, and sometimes not even in the first year.
I was under the impression that bringing back older threads was okay as long as the post was relevant (I mean we are about to have our first 9th (or at least 8 1/2 gen) console so I thought it'd be a good idea to post an update on my thoughts on it) and not just a generic "bump" or "ttt" sort of deal.
It is totally okay, especially when the post is relevant. I was just saying "holy cow, this is an old thread!", not "Shame on you for bumping this!". I probably would have lol'ed if you just dropped a random "ttt" to a 3 year-old thread.
PS: Bob forgot to ask if her cat was spayed or neutered! Mine is neutered Bob, just for you.
Lol, you call him out for necroposting, and yet you quoted someone who also happens to have been banned for a year now
"Call him out?" I realized after I posted that all the old replies except the last one were from years ago. I fail to see the "yet" here, which implies that I've somehow hypocritically done something to do with necro-bumping a thread that was on the first page.
Dude, relax, I was just playing with you. It was funny because you were ragging on Estil for necroposting, but then you did something equally funny in forum terms (to me at least): quoting a member who has a giant BANNED avatar like they're gonna see what you wrote
Regardless of whether you knew the thread was a necro or not, quoting a long time banned member to counter argue him is funny.
What is there to edit in posts like these?
?
I think it's a combination of that and what extra development effort brings (beyond simple familiarity with the platform). When Max Payne was released people praised the multimedia presentation with static comic book panels and voice-overs, but even during the PS2's life the standard increased to where people expected a team of animators and voice actors with fully-animated cutscenes for an A-list title. Now they're expected to be fully animated by a team of people who might as well be working on CG movies!
Compare Super Mario Sunshine and Super Mario Galaxy on essentially the same hardware: we get a fully orchestrated soundtrack and cinematics of a completely different caliber in Galaxy, where we got a mostly real-time generated soundtrack and some noisy text boxes in Sunshine. The difference was neither hardware capabilities nor familiarity with the hardware: it was scope of development and development budget. You couldn't always justify an orchestral score for a game in 2001, but it was somewhat expected for A-list titles after 2007. Heck, it seems like Hans Zimmer scores as many games as movies these days.
You might say that lifting the disc capacity limitation is what enabled Galaxy's orchestral soundtrack, but that's not the only example: take a look at The Legend of Zelda: The Wind Waker and The Legend of Zelda: Twilight Princess on the same platform (GameCube). They even use the same game engine! The higher expectations of newer A-list game development are why TP got an orchestral soundtrack and WW didn't. This was obvious when reviews still panned the game as antiquated for its silent characters and text-only interaction (few would have made that complaint about Wind Waker in 2002). Despite the expanded scope of its development, it still did not live up to the expected scope.
While more polish and more performance on the same hardware does mean more time/effort/skill, it has become the norm for development scope to increase regardless of the platform's performance. The performance boost that comes from increased familiarity with a particular platform is likely secondary to this. How much of Super Mario World 2: Yoshi's Island was due to the increased performance of the Super FX 2 chip and development improvements, and how much was simply due to the extended development cycle allowing for more polish? That game clearly benefited from all of the above!
Very true, and those among us who have any sort of appreciation for the process of developing a game can recognize that. It just really blows me away how much potential of a system can be unlocked over the course of time as developers become familiar with the hardware they're developing for. Growing up I thought many times, "holy shit, I didn't realize my NES/SNES/N64/PS1 etc was capable of pushing games like this!". Nowadays I have to remind myself of that whenever a new piece of hardware comes out and doesn't knock my socks off immediately with launch titles. I realize it takes time to get a rhythm going with a system, and now I feel more comfortable dropping cash on a new system.
To another point in this thread, I do feel like we're getting into an age where silicon and chips don't play much of a factor anymore, and the limits of how insane a game can get will really only be governed by how much time and work a developer is willing to put into their game.
I think game graphics will be stalled in the uncanny valley for quite a while to come. I just watched the latest Kojima trailer, and as impressive as that was, it still looked like dead-eyed mannequins.
The uncanny valley is something Nintendo understands very well, but 3rd party developers don't. Nintendo games have bright, colorful, flamboyant worlds that couldn't exist in real life, not the drab greens, browns, and grays of modern FPS-style titles. Look at Mario. He is twice as tall as he is wide, basically proportioned like a toddler. Throw in a bulbous nose, a bushy mustache, and a jump that equals four times his height. His cartoon-like qualities trigger the brain's inner child, releasing endorphins, and you get similar effects with DreamWorks and Pixar movies.
Instead, AAA third party developers often try to go for lifelike realism, and it breaks down from there. Mannequins with chunky movements. There's a reason why voice actors often act out their characters' scenes wearing motion-capture suits, even for CGI movies: to get the animation right. Sometimes in video games, your character even keeps running at full speed when you hit an invisible wall. That's just lazy 6th-gen, 2001-style development. Run into a wall in any 3D Mario game, from 64 to 3D World, and he doesn't do that.
All the processing power in the world won't bridge the gap across the uncanny valley without serious development effort. But big budget games often cost more to produce than movies, and with razor-thin profit margins, one big flop can bankrupt a studio. Developers have to learn the art of doing more with less. Design limitations stimulate creativity, which is severely lacking in many modern games. Mobile games, for instance, as well as some indie titles (some indies are great but many are abysmal), just pale in comparison to back-in-the-day games where devs had to work around practical restrictions. No surprise that some of the best-designed indie games from a gameplay perspective capitalize on this fact by building engines around chunky-looking pixels despite playing on HD-capable screens.
Good motion capture technology has been around for a long time, as well.
BUT, that stuff all comes at a hefty price.
Studio movies can afford to spend $200MM in production, since they'll gross $1B+ worldwide with a blockbuster.
Even hugely successful games usually earn a fraction of that (and with a less favorable return for the studio, in terms of portion of gross that makes it all the way back).
Uhhhhhh, look up GTA V Arch.
The exception, not the rule.
But I updated my comment, all the same.
That might as well have been a movie for as much promo they did for it.
That also might be what started this topic; when they released the trailer for GTA V on PS3, it looked graphically better than anything that was out for PS4 at the time.
Going back to games, Fallout 4 has gotten slammed for its graphics. People say they're not that great... it has to be one of the best-looking games I've seen in years. It's beautiful. But all games (save for stuff like Fallout and Dishonored II) right now are looking the same to me. I do think we've reached the point where we can't go any further in graphical quality.
Next step would be 3D and VR graphics. Seeing stuff with the naked eye can't get any better... we need more advanced hardware or more advanced eyes. Maybe that's the next big step? Robotic eyes?
^^^ The way through the uncanny valley issue is, at least in part, to do what big budget movies do for lifelike CGI faces, where they superimpose real faces and eyes (i.e. Gollum, King Kong, probably Jungle Book, etc).
The primary reason why Gollum is so believable is that his actor wore a motion capture suit and the artists literally captured every movement, from broad, sweeping motions to the slightest twitch of facial muscles. It was the first time an actor won an award for a CGI character, and it was totally deserved. If you watch the bonus DVD features from the LOTR boxed set, they even show raw footage of him flopping around in a frigid creek.
But I have yet to see any evidence of CGI motion capture techniques employed in video games. I think it would be beneficial for some independent studio to do motion capture of actors performing a bunch of generic acts such as standing, walking, running, driving, opening doors, eating, shooting guns, shooting bows, swinging weapons, lifting, climbing, etc., and build motion capture libraries for development studios to license.
Most of the time when a character stands, for instance, they will periodically shift their weight from one hip to the other, causing one knee to be slightly bent and one hip slightly lower than the other. I took some art classes in college, and one thing we learned was that poses are never truly symmetric unless you are a soldier standing at attention.
Then there's the effect of gravity on parts of the body; for instance, a woman's chest will flatten in a reclined pose as opposed to standing. And I'm not even going to go into the exaggerated proportions and "jiggle physics" some animators employ. Hair is extremely dynamic, but most CGI video game characters keep it short or mostly conformed to the body. Particle engines can simulate hair movement in CGI movies, but you have all sorts of collision effects and intersections with the body that have to be hand-manipulated for each frame to be believable. In a game, such tweaks would be impossible, so it's much easier to just cut it short and ignore gravity.
Also, hair is one of the most expensive aspects of rendering a character, because you've got thousands of strands that kind of move together but each bend to their own will. Simulating a single frame of gorgeous wavy hair may take minutes to render at 1920x1080 even on the fastest workstations, hardly ideal for a 1/60-second frame budget. As a result, game hair doesn't use particle effects at all but rather tends to consist of sculpted poly shapes with textures. They run these sculpted forms through the same "jiggle engine" they use for cleavage and accessories, so that ponytails and things tend to float as the character bobs up and down, making the physics only partly believable.
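That "jiggle engine" is usually nothing more exotic than a damped spring per bone: each frame, the ponytail's point gets pulled toward wherever the animation says it should be, so it lags, overshoots, and settles as the character moves. A minimal sketch of the idea (the function name and constants here are my own illustration, not any particular engine's code):

```python
def jiggle_step(pos, vel, target, dt, stiffness=40.0, damping=6.0):
    """One damped-spring update for a 'jiggle bone'.

    The bone's point is pulled toward its animated target position,
    with damping so it settles instead of oscillating forever.
    Uses semi-implicit Euler (update velocity first, then position),
    which stays stable at game framerates.
    """
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel

# The bone starts at rest; the animation snaps its target up by one
# unit, and the jiggle point chases it, overshooting and settling
# instead of teleporting instantly.
pos, vel = 0.0, 0.0
for _ in range(120):  # two seconds at 60 fps
    pos, vel = jiggle_step(pos, vel, target=1.0, dt=1.0 / 60.0)
```

In practice a real hair rig chains several of these bones together and clamps the angles, but the core "spring toward the animated pose" update is this simple, which is why it's cheap enough to run every frame where a true strand simulation isn't.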
The uncanny valley is a long trough to cross, and again, most pure CGI movies (those without live action elements) use stylized characters that stay on the high mound before the ravine. You can't give the Gollum treatment to every single character, environment, and prop in an animated film (much less a video game that renders on the fly). It would likely cost billions and take many years of work to produce such a film, and it would be extremely lucky to make 200 million at the box office.
Even in live action flicks, they blend as many practical effects as they can with CGI and postwork to make it believable. You simply cannot do that in video games. Animated violence or sexualization is not without precedent, so there's no reason why main characters in a Teen or MA game can't be modeled with cartoon-like stylization like typical Pixar/DreamWorks movies, but in a "mature" fashion. Many fantasy RPGs in fact do this, avoiding the uncanny valley entirely. But more and more frequently, developers ignore the "bridge out" sign and repeatedly dive headfirst into this valley, and gamers must just accept it.
That said, it will be neat to see where Zelda BOTW goes with regards to artistic direction. I expect nothing less than being perched upon that high mound immediately before the ravine.
Really? I'm certain I have seen promo videos of studios using those exact techniques for character motion (though I can't recall the specific games involved).
I don't think I've seen that same attention given to face capture in video games, but I wouldn't be surprised if somebody was dabbling in it.
They may do motion capture for the cinematics but not the actual gameplay.
Motion capture is used to record gameplay movement. I remember watching something about Andy Serkis donning the motion capture suit to record ledge jumping/hanging animations for the video game Enslaved.
He also did it for Heavenly Sword. His involvement was heavily marketed long before the game was finished. Not his fault, but it ended up just being an inferior God of War clone that relied WAY too much on cinematic elements to impress you into thinking the game was good.
Seeing what Pixar has been capable of with no photographic textures or motion capture AT ALL (they see it as lazy and inartistic), I get more excited by talented animation that doesn't use these shortcuts.
I remember watching stuff about the mocap used in Wayne Gretzky's 3D Hockey before the N64 was even released, so it goes back a lot farther.
Fun fact, games of today look better than Toy Story...