I just looked on ebuyer; out of 151 laptops only 19 have more than 4 GB, and for desktops it's 19 out of 126. On the Steam Hardware Survey, only 7.99% of users have 5 GB or more. It'll still be more than 5 years before most people have 16 GB or more.
I'd also bet that high-speed SSDs (not all SSDs are fast; the ones in the early Eee PCs, for example, weren't) are much less common than having more than 4 GB of RAM.
I have an SSD; maybe not worth the price, but there is a benefit.
But more than 4 GB of RAM? When would you ever need that?
Maybe if you're playing a new game, while doing 3D stuff in Maya, while having Photoshop open, while watching an HD vid, in a four-screen setup… on Windows Vista.
I only recently got 4 GB because I had 3x 1 GB sticks, one broke, so I got a 2 GB stick.
I never maxed out 3 GB even with three screens (work, entertainment, browser),
not even with video editing, 3D stuff, or Photoshop (like I said, unless you're doing everything at once, maybe).
Um nearly always…
I’m running OS X 10.7 (Lion) on a laptop with 8 GB RAM… I have my email open (Mail.app), a Terminal window or two, Finder, iChat, lots of other little system things (Dashboard etc.) and Activity Monitor… current memory usage is: 5.40 GB.
… no swapping though, which is much nicer than swapping to an SSD, and way cheaper.
Being the multitalent that I am, I usually have Firefox open with at least a dozen tabs, plus DbVisualizer, Eclipse and IntelliJ, Outlook, Acrobat Reader, Photoshop, Inkscape, StarCraft 2 running, a video playing, TextPad, Notepad, a cmd prompt, PuTTY, IE, etc.
I currently have 4 GB, and it's borderline "sufficient". 8 GB might be more than sufficient for today's usage, but getting 16 GB today might provide some future-proofing, because programs keep requiring more memory. Honestly, though, I'd only get 8 GB today and another 8 GB pair in two years' time.
I don't really think in terms of necessary or unnecessary bloat any more, just whether something works properly. Otherwise I wouldn't be such a fan of Java, which is massively over-engineered 90% of the time versus an equivalent simple C program. I mean, using 8 MB of RAM to print out Hello World? Crazy! But convenient. So it's cheaper for us all to buy 4 GB of RAM than it is to have all programmers spending twice the development time and hence increasing the cost of all software produced.
I've got 6 GB in my Vista64 box and usually use up at least 4 by the time I get to a working desktop!
With regard to RAM usage:
I honestly thought that in Vista and Windows 7 they changed the way RAM is allocated/reserved.
Prior to Vista, RAM was allocated based on what a program was actually using, but Vista and Windows 7 preallocate RAM so that if a program "might" need it, it has a reserved spot. However, when other programs start saying "I need more RAM", the OS starts taking RAM away from a program that appears to be using more, when in reality it's not actually utilizing it.
i.e. XP says "oh, you're using 500 MB of your 4 GB", while Vista (although running mostly the same stuff) might say you're using 2 GB of your 4, but in reality it's still only using 500 MB and has just temporarily appeared to give more RAM to programs "just in case"? Or some random bullshit like that.
Now clearly, sometimes this isn't the case, especially when browsing with a ton of applets/Flash and 50 tabs of pages open.
I remember when Vista first came out that people were freaking out about high RAM usage, and very few people ever realized it was because of the change in preallocation (which can be modified somewhere/somehow).
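If you want to check this yourself rather than trusting the headline number in Task Manager, here's a rough C++ sketch using the Win32 GetPerformanceInfo call. It separates total/available physical memory from the system cache, which is the part that tends to look "used" even though Windows hands it back the moment a program actually asks for it. (Just an illustration of the API, nothing to do with the video.)

```cpp
// Rough sketch: query Windows memory counters to see how much physical RAM
// is genuinely unavailable vs. merely sitting in the system cache.
// GetPerformanceInfo lives in psapi (link psapi.lib / -lpsapi).
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main() {
    PERFORMANCE_INFORMATION pi = {};
    if (!GetPerformanceInfo(&pi, sizeof(pi))) {
        std::fprintf(stderr, "GetPerformanceInfo failed: %lu\n", GetLastError());
        return 1;
    }
    // Most counters are reported in pages; PageSize converts them to bytes.
    const double mb = pi.PageSize / (1024.0 * 1024.0);
    std::printf("Physical total:     %8.0f MB\n", pi.PhysicalTotal * mb);
    std::printf("Physical available: %8.0f MB\n", pi.PhysicalAvailable * mb);
    std::printf("System cache:       %8.0f MB\n", pi.SystemCache * mb);
    std::printf("Commit charge:      %8.0f MB of %8.0f MB\n",
                pi.CommitTotal * mb, pi.CommitLimit * mb);
    return 0;
}
```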
I still think 4 GB is plenty for most everyone outside the world of photo/video editing and 3D graphics (though I personally have either 4, 6 or 8 in my computers). Check out the comparative performance tests where they run the same system/apps with 2, 3, 4, 6 and 8 GB and see the minimal to nonexistent changes in performance.
This is what I thought; I could be totally off base, so if someone can prove me wrong I'll gladly listen to counterarguments.
For a new, potentially memory-heavy rendering tech it's the maximum memory capability of current hardware that matters, not what people actually have installed.
Write a revolutionary game that needs 8 GB minimum and people will happily go out and buy 8 GB (~£50).
It's when the potential customer has to upgrade their motherboard, processor and operating system too that you run into problems (£300+).
Tbh though, this is all completely irrelevant; Unlimited Detail tech is still experimental, so we're still at least 2 years from seeing a game built upon it.
I wonder if it’ll be sufficiently mature in time to shape the hardware specifications of the next gen. consoles.
Personally I’d love it not to be, as we’d see PC titles quickly eclipse their console equivalents.
Level of distance…? >:( I’m pretty sure it’s Level of Detail. >___>
Okay, so they seem to be able to solve most of the problems they've been criticized for. They can apparently do some animations, though that wasn't overly convincing. Memory usage was just "That's not a problem", though… I'm going to assume that they have solved, or will solve, many of these problems. Over a few years, graphics performance improved several thousand times thanks to parallel-processing graphics cards; a similar increase would apply over the time span of their work. This isn't coming out of the blue; they've been working on it for years.
I would like to ask a question about the actual rendering. If, as they say, they only need one atom per pixel, wouldn't that introduce HORRIBLE shimmering? Sure, you can render stuff miles away, but that would be like not using mipmaps on an extremely large texture: huge aliasing problems and huge shimmering problems. I don't see how they can solve that. You can't really do multisampling here either, as you're almost guaranteed to hit different atoms in every sample. The only solutions would be supersampling (expensive) or GOAA (glasses-off antialiasing, also known as MLAA/FXAA to the ignorant masses). However, it doesn't look like they have a problem with this in the video, but it's hard to see…
Agreed: if they're grabbing one atom per pixel, you'd get terrible shimmering. But clearly they have the problem solved. Either they're lying and are actually grabbing multiple atoms, or they're using some sort of tree to represent the data and going deeper for more detail, so that essentially each layer of the tree is a mipmap.
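To sketch what I mean by "each layer is a mipmap": you stop descending the tree as soon as a node's projected size drops below roughly one pixel, exactly the way you'd pick a mipmap level. A toy C++ illustration of that idea (purely my guess at the approach, nothing confirmed by the video):

```cpp
// Toy sketch: choose how deep to descend an octree based on distance, so the
// node you return covers roughly one pixel on screen, the voxel analogue of
// picking a mipmap level. Purely illustrative.
#include <cmath>
#include <cstdio>

int levelForDistance(double rootNodeSize,   // world-space edge length of the root node
                     double distance,       // distance from the camera to the node
                     double verticalFovRad, // vertical field of view in radians
                     int screenHeightPx,    // framebuffer height in pixels
                     int maxDepth)          // deepest level stored in the tree
{
    // World-space size that covers about one pixel at this distance.
    const double pixelWorldSize =
        2.0 * distance * std::tan(verticalFovRad * 0.5) / screenHeightPx;

    // Each level halves the node size; stop once a node fits inside a pixel.
    int level = 0;
    double nodeSize = rootNodeSize;
    while (nodeSize > pixelWorldSize && level < maxDepth) {
        nodeSize *= 0.5;
        ++level;
    }
    return level;
}

int main() {
    // A 1 km root node, ~60 degree FOV, 1080p screen: nearby points need deep
    // traversal, distant points only need coarse "mipmapped" nodes.
    for (double d : {1.0, 10.0, 100.0, 1000.0})
        std::printf("distance %7.1f m -> level %d\n",
                    d, levelForDistance(1000.0, d, 1.0472, 1080, 20));
    return 0;
}
```

That would also explain why distant geometry doesn't shimmer: far-away pixels hit coarse, prefiltered nodes instead of individual atoms.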
I am much more impressed after seeing it actually running, but they still need a laptop with 8 GB of RAM in order to run what is just a tech demo (less resource-intensive than a typical game). Again, given that only a tiny proportion of PC users have that much RAM, and that only a tiny proportion of PCs on sale today have that much RAM, I still don't see how this can be usable any time soon.
However, given that a triple-A game could easily take 5 years to build, and UD won't be out for at least another year, we could be looking at 6 or 7 years before we actually get to play a full game using this technology. By then we'll certainly have more than 8 GB of RAM, but I can imagine this would still be pushing the upper limit of memory usage.
A few things I would like to bring up. What impressed me most was the claim that the demo was running in software mode. If that's true, then I've yet to see anything run in software that smoothly, at that level of detail, on my PC, and I can't say my machine is too old. Even the screenshots of Minecraft lag my machine (or maybe I'm exaggerating with that last one).
Also, saying that the laptop needs 8 GB is speculation. He took out the network cable, but he could have used wireless to connect to the Chinese server farm and download the JPEGs.
I would have been interested in the system usage, though. Running on Windows, a Task Manager window would have been nice.
Don’t fall for it. Did you not notice the frequent video cuts, staged questions, and the two bumbling idiots looking at their card/paper notes because they can’t remember their own fake interview script?
I think that it is hilarious that they show a raytracing example and generalize and say that the result is ugly.
They also demonstrate tessellation with a low-res heightmap.
Not a word about procedural techniques for geometry and textures, which would be an obvious comparison. Take, for example, the raytracer POV-Ray (and plenty of others), which uses implicit descriptions of environments and gets "infinite" detail just by using more rays. The "search algorithm" they use can't be that extremely different from a raytracer, right?
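For comparison, the per-pixel "search" a classic raytracer does is just: fire one ray per pixel and find the nearest hit against an implicit description of the scene. A bare-bones C++ sketch (one sphere, no shading, ASCII output), only to show how little machinery that takes:

```cpp
// Bare-bones raytracer "search": one ray per pixel, analytic ray-sphere
// intersection against an implicit scene description (a single sphere).
// Prints an ASCII depth map. Purely illustrative.
#include <cmath>
#include <cstdio>

// Returns distance along the ray to the nearest hit, or -1 if the ray misses.
double hitSphere(double ox, double oy, double oz,   // ray origin
                 double dx, double dy, double dz,   // ray direction (normalized)
                 double cx, double cy, double cz,   // sphere centre
                 double r)                          // sphere radius
{
    const double lx = cx - ox, ly = cy - oy, lz = cz - oz;
    const double tca = lx * dx + ly * dy + lz * dz;        // projection onto ray
    const double d2 = lx * lx + ly * ly + lz * lz - tca * tca;
    if (d2 > r * r) return -1.0;                            // ray passes outside
    const double thc = std::sqrt(r * r - d2);
    const double t = tca - thc;                             // nearest intersection
    return t > 0.0 ? t : -1.0;
}

int main() {
    const int W = 60, H = 30;
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // One ray per pixel through a simple pinhole camera at the origin.
            double dx = (x - W / 2.0) / W, dy = (y - H / 2.0) / H, dz = 1.0;
            const double len = std::sqrt(dx * dx + dy * dy + dz * dz);
            dx /= len; dy /= len; dz /= len;
            const double t = hitSphere(0, 0, 0, dx, dy, dz, 0, 0, 4, 1.5);
            std::putchar(t < 0 ? '.' : (t < 3.0 ? '#' : '+'));
        }
        std::putchar('\n');
    }
    return 0;
}
```

Whatever their "search" is, per pixel it has to answer the same question this loop does: which piece of the scene is nearest along this line of sight.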
The most impressive part is the update speed (unless they faked it) and the antialiasing. Even with everything being axis-aligned on a grid, that surely is impressive.
This "interview" just makes me even more skeptical than the previous video did.