
Nvidia RTX Discussion


Comments

  • Registered Users Posts: 7,409 ✭✭✭Icyseanfitz


    The 4090 is the only compelling card from Nvidia this generation, and it's priced at what a Titan (halo product) used to be (less, actually). There's no denying Nvidia are just being greedy **** this generation, and as previously said, COVID + crypto made them realise people will actually buy the stupidly priced cards en masse while somehow convincing themselves it's a good deal so they don't feel ripped off (which they are).


    I mean, the only card where they kept the MSRP the same is actually worse than the same tier from last generation 🤷‍♂️



  • Registered Users Posts: 18,703 ✭✭✭✭K.O.Kiki


    Talking of power, I've found that using the Nvidia Performance overlay, I can lower the power target to 70% or even 65% and still get near enough the same performance, at MUCH lower power draw (325W -> 200W) and heat generation.

    Obviously this is game dependent, but I don't play many demanding titles right now; and in those I'm happy to cap at 60 or 120fps anyway.
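
    (For anyone who'd rather script this than click through the overlay: a minimal sketch of the same idea using nvidia-smi's power-limit switch, called from Python. The GPU index 0, the 70% figure and the helper names are just assumptions for illustration; changing the limit needs admin/root, and cards won't accept values below their minimum limit.)

```python
# Sketch: cap the GPU power limit to ~70% of its default via nvidia-smi.
# Assumes nvidia-smi is on PATH and you're targeting GPU index 0.
import subprocess

def default_limit_watts(gpu: int = 0) -> float:
    # power.default_limit is reported in watts, e.g. "320.00"
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu),
         "--query-gpu=power.default_limit", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

def set_power_limit(watts: float, gpu: int = 0) -> None:
    # Needs admin/root; nvidia-smi clamps to the card's min/max limits.
    subprocess.run(["nvidia-smi", "-i", str(gpu), "-pl", f"{watts:.0f}"], check=True)

if __name__ == "__main__":
    stock = default_limit_watts()
    target = stock * 0.70  # roughly the 70% power target mentioned above
    print(f"Default limit {stock:.0f} W -> capping to {target:.0f} W")
    set_power_limit(target)
```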



  • Registered Users Posts: 5,829 ✭✭✭Cordell


    The thing is that a lot of recent titles are GPU-demanding only because they are so poorly optimised. That's a bigger issue than GPU prices, and it's a trend that might even lead back to 30fps being the accepted standard.



  • Registered Users Posts: 5,556 ✭✭✭Slutmonkey57b


    This is actually the most interesting trend across all three companies' most recent releases: if you trim the voltages or power on CPUs and GPUs by a small margin, you can retain 85-90% of the performance at significantly reduced power. Even locking AMD's CPUs at 65W, compared to the 170W ceiling, retained most of the performance.

    All that proves is that the industry has been pushing buyers into unnecessary overclocker territory for the sake of headline benchmark margins. If reviewers provided better analysis, buyers could more easily save money on better engineered, safer products.
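
    (To put rough numbers on that: a back-of-the-envelope sketch using the figures quoted in these posts. The 97% used for the GPU case is just a stand-in for "near enough the same performance", not a measurement.)

```python
# Back-of-the-envelope: how much performance-per-watt improves under a power cap,
# using the rough figures quoted in the posts above.
def efficiency_gain(perf_fraction: float, capped_watts: float, stock_watts: float) -> float:
    """Ratio of perf-per-watt at the cap versus at stock settings."""
    return (perf_fraction / capped_watts) / (1.0 / stock_watts)

# CPU example: ~90% of performance at a 65 W cap vs the 170 W ceiling
print(f"CPU: {efficiency_gain(0.90, 65, 170):.2f}x perf/W")   # ~2.35x

# GPU example from above: near-identical performance at 200 W vs 325 W
print(f"GPU: {efficiency_gain(0.97, 200, 325):.2f}x perf/W")  # ~1.58x
```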



  • Registered Users Posts: 105 ✭✭cornholio509



    It's not really that interesting a discussion. Nvidia are trying to be the next closed ecosystem. I remember the BFG days. Their drivers were better, and yes, BFG did write drivers; BFG was the original reason for Game Ready drivers. It didn't take long until that got locked down by Nvidia. The next thing Nvidia locked down was base clock speeds. Now they are locking down the entire board specs. If Nvidia can't lock it down by dictating what AIBs can and cannot do contract-wise, they will price AIBs out of it with their Founders cards. Hence EVGA ending their partnership with Nvidia.

    Currently the only way AIBs can differentiate is on cooling and overclocking. The RTX 40x0 has the hardware locked down as much as Nvidia can without hurting their own profits. It's only a matter of time until Nvidia prices AIBs out of that side of things too. Nvidia are trying to be the next Apple.

    The issue with benchmarks is that consumers in general have the attention span of a goldfish. Manufacturers know this, so they limit the time reviewers have with their products. Every tech reviewer has been moaning about this for years. They have always complained that they don't have the time to do the testing they want to: last-minute driver changes, GPUs landing for testing 24 hours before launch, last-minute price drops forcing articles to be rewritten or videos to be reshot. Heck, even prominent tech tubers have been blacklisted from reviews or flat-out threatened by Nvidia.

    What needs to happen is consumers changing their attitude towards how they perceive these companies. Money makes their world go round, so when products are overpriced or bad, just boycott them. I was using ATI at first. I moved to Nvidia because it was better and cost roughly the same. Now I have dropped Nvidia like a bad habit. I could easily afford a 4090, but looking at Nvidia's excuses for the costs, I went AMD instead. I would have gone Intel only they are so new and the drivers were bad. Maybe when I upgrade next, Intel could be a consideration, as long as they get on top of their driver issues. Unless Nvidia change course, I won't be buying from them for the foreseeable future.



  • Registered Users Posts: 5,556 ✭✭✭Slutmonkey57b


    Tech reviewers have allowed all three companies to successfully push the PR spin on what they're doing, though. Der8auer's recent interview with a thermal engineer from Intel was interesting, but when the topic of why current parts have got so ridiculously hot and power hungry came up, the answer of "why not, it's perfectly safe" wasn't really challenged. Der8auer's background is overclocking, so he naturally feels that's an acceptable answer, but for normal consumers it shouldn't be. Very few reviews note the actual power requirement of Intel's latest generations, and they still categorise them according to Intel's "official" (dishonest) binning.

    So the review community, in my opinion, has a lot of improvements to make so that the ball isn't 100% in the user's court, because without users on the likes of Reddit investigating, half the **** ups wouldn't have been picked up by the press. "I don't like the mechanics of this new connector, it doesn't click in like the old one did and doesn't feel as secure" shouldn't have been a hard investigation and conclusion to reach.



  • Registered Users Posts: 18,703 ✭✭✭✭K.O.Kiki


    4060 Ti's 128-bit bus once again hurting its performance.

    4070 looking like a good buy in Nvidia's stack.



  • Registered Users Posts: 105 ✭✭cornholio509


    @Slutmonkey57b. Tech reviewers haven't allowed anything, as they don't have the power to effect change. I can't count the number of times tech reviewers have asked consumers not to fanboy or pre-order any tech until reviews are out. Consumers still do it. A common pattern with every piece of PC tech is to leak some specs 6 months in advance. Consumers get hyped up and buy into the product before it's even launched. Get a benchmark out here or there during the validation phase. Then they announce the GPU/CPU with a rough launch date to intensify the hype, without really showing anything other than the design. The keynote happens with some slides but no real numbers, plus a launch date. Pre-orders then open. On launch day, everyone who has bought into the hype hits buy on their online basket, then hopes the reviews match their expectations. If the reviews end up crap, they defend their purchases by attacking the reviewers.

    Again, consumers get warned by tech reviewers trying to change the status quo. Still we act like petulant children and ignore sound advice, ultimately empowering big tech to treat us like money bags to be manipulated.

    Now, on the power and heat issue, let's not go there. It's too technical a topic. However, if engineers were worried about power and heat from consumer products, the most advanced PC you would have is a Commodore 64 on a CRT screen. Engineers know more about how to manage these things than we do. And yes, sometimes you do have to throw power at a problem until a better solution comes along.

    I think the 4070 is only compelling because of the 4060, to be honest. For the Nvidia diehards it's only compelling if you're still using a 1070 or 1080; maybe there's even an argument for it from the 20 series. Beyond that, the 30 series or AMD GPUs are a cheaper solution and a better argument for an upgrade.

    Don't get me wrong, I always bought Nvidia. However, I can't come up with any reason why I should stick with them. I did go AMD for my new personal rig. DLSS 3 and better ray tracing just don't justify the cost to me.



  • Registered Users Posts: 31,015 ✭✭✭✭Lumen


    CPUs are hot because Moore's Law is dead, so the silicon has to be pushed to maximise performance per core. Unless you run Apple Silicon, but that's no good for gaming.

    GPUs are hot to maximise performance per die area. We could have very efficient, high-performance GPUs with massive dies on smaller process nodes running below their outright performance potential, but where's the profit in that?



  • Registered Users Posts: 18,703 ✭✭✭✭K.O.Kiki


    Apple M1/M2 is actually quite capable of gaming, especially considering the low power draw.

    They recently released a game porting toolkit and it's getting good results even on M1 hardware.

    r/macgaming has lots of results for this.

    And people are making tutorials for us lesser-understanding folks: https://www.outcoldman.com/en/archive/2023/06/07/playing-diablo-4-on-macos/



  • Registered Users Posts: 5,556 ✭✭✭Slutmonkey57b


    My point is that the delta between "maximum performance" and performance at a much lower power/heat envelope is not very big. This is not a new challenge, as anyone who was around for the Pentium 4 is well aware. The difference is that today, the "stock" settings are out in the extreme where the return on investment is minuscule and where only severe overclockers used to hang out.


    All the companies should have been called out harder, but it was only when users started reporting failures that the tech reviewers who toed the company lines of "this 260W processor is actually a 125W processor" or "95 degrees is an acceptable heat target" started backtracking a bit. If these tiny 5fps "wins" in shittily made games weren't reported as "wins", customers would be better served.



  • Registered Users Posts: 105 ✭✭cornholio509


    You are under the illusion that the CPUs and GPUs we get are designed for us. They are not, far from it. What we get is what failed to be a server product, be it a cache failure, not being capable of the designed wattage draw, or some other flaw. Engineers then test to see what is capable of what, and then it's a consumer chip. I am sure Nvidia, Intel and AMD would love every bit of silicon to be perfect. Unfortunately, the smaller the node, the higher the failure rate. Return on investment on silicon comes down to what can be recycled into a different product.

    Do I agree with what they do to make consumer chips? No. It is, however, what happens. You, me and every reviewer get no say in how these CPUs/GPUs work. We can only say a product is crap by not buying it.



  • Registered Users Posts: 5,829 ✭✭✭Cordell


    That's not exactly true; as a matter of fact it's more false than true. It is true that they design platforms and architectures to serve all markets, and the most profitable markets dictate which features are prioritised, but that doesn't mean that the silicon we get in consumer products is silicon that failed to qualify for other markets.

    Also, features like ray tracing and DLSS were designed for the consumer market.



  • Registered Users Posts: 13,981 ✭✭✭✭Cuddlesworth


    I'm pretty sure that statement would only really be true of Nvidia, and even then at a push, and all they have done is limit their top-end chip to silly consumer products like the 4090.

    AMD split its GPUs into the RDNA and CDNA architectures; they have literally been fabbing different datacenter and consumer chips for a while now.

    On the CPU side, AMD produces the same CCX/CCD chiplets, but the server CPUs usually run a generation behind. What they bin for is significantly different: high frequency and high power isn't what the datacentres want, they want high core count and low power, which is usually linked to lower frequency. Either way, their process is so efficient that their defect rate is allegedly very low, so they cripple a lot of CPUs for market segmentation rather than binning them to sell top-end parts.

    I haven't really looked at Intel in a while, but their fab process in the past produced 3 distinct chips: desktop, HEDT and server. I think HEDT and server have merged now, but they are still completely different chips if I'm not mistaken. The only major issue I have with them is they literally hold back their top-end chips to paper-launch some KS chip while running scared of AMD's parts.



  • Registered Users Posts: 5,556 ✭✭✭Slutmonkey57b


    I think you're confusing binning the fab outputs with product design.

    Prior to Ryzen, it would have been grossly inefficient to design behemoth server CPUs and then hope that the defect rate was high enough to give sufficient volume to sell into the consumer market, especially when you bear in mind that the vast majority of consumer sales are in the business space.

    There would only have been very special circumstances where a company would think it viable to waste server dies in consumer channels instead of just selling a cheaper server SKU. Nvidia being given almost-free wafers from Samsung because they didn't meet yields would be one example.


    AMD have designed Ryzen so that the difference between a server and a consumer CPU is made at the packaging stage, where the CCDs, the I/O die and the 3D V-Cache are put together, not at the design or wafer stage. That means their design for a CCD potentially falls between two stools compared to a fully optimised server or consumer design, but that's more than offset by the cost reduction, yield improvements, and packaging opportunities offered by splitting the CCDs out in the first place. Not only is it not the case that Epyc CPUs are being "turned into" consumer chips, the binning to determine which CCDs go into Epyc or Ryzen packaging is done before the packages are put together, so... no.


    Intel still use fundamentally different monolithic dies for server and consumer, where the I/O is baked into the same die as the compute clusters. Again, to take a huge CPU die with traces for loads of cores and PCIe outputs and turn it into a Celeron... something would have needed to go very badly wrong to make that an economically viable option. They're just about to move to a similar "tiled" setup as AMD anyway.


    Apple and Nvidia are the "ridiculous monolith" holdouts now. Nvidia will have to move at some point, and Apple don't care if their yields are crap because their customers will always buy whatever crap they put out regardless, so they can just bump the price to compensate.



  • Registered Users Posts: 5,556 ✭✭✭Slutmonkey57b


    It's the other way around: AMD release new generations on Epyc first, and then follow with Ryzen.



  • Registered Users Posts: 13,981 ✭✭✭✭Cuddlesworth


    In what terms exactly?

    Zen 1: Ryzen 1000, March 2017; EPYC Naples, June 2017
    Zen 2: Ryzen 3000, July 2019; EPYC Rome, August 2019
    Zen 3: Ryzen 5000, Nov 2020; EPYC Milan, March 2021
    Zen 4: Ryzen 7000, September 2022; EPYC Genoa, Nov 2022

    My timing was off because, in my experience, server markets move really slowly. An official release date for a server chip means you might actually see servers with them 6-9 months later. But it's not off in the "server chips come first" way. Threadripper, on the other hand...



  • Registered Users Posts: 18,703 ✭✭✭✭K.O.Kiki


    RIP GPU market



  • Registered Users Posts: 4,507 ✭✭✭TheChrisD


    I think the image needs a legend of some kind. Why are CUDA cores the benchmark, and why is the 40 series all reduced down in some manner?



  • Registered Users Posts: 18,703 ✭✭✭✭K.O.Kiki


    A complete AD102 GPU has 18,432 CUDA cores (see the Ada whitepaper), while the RTX 4090 is 88.88% of a full die, i.e. 16,384 CUDA cores (red zone, 87.5-90%).

    The OP gave the greyed-out areas as "gaming GPUs' CUDA count compared to the RTX 4090", e.g. 4080 = 59.375% (9,728/16,384).

    The main gist though: compared to the last 5 generations, every Nvidia 40-series card is cynically overpriced and cut down further relative to the top-end chip. By the old naming standards, each card would sit a tier lower:

    • 4060 => 4050
    • 4060 Ti => 4050 Ti
    • 4070 => 4060
    • 4070 Ti => 4060 Ti
    • 4080 => 4070
    • 4090 => 4080 Ti

    If those practices had been kept, I wouldn't be moaning about "4060" and "4060 Ti" not beating the 3060/3060 Ti at higher resolutions - just beating them at 1080p while being budget cards (somewhere between $150-220) would've been amazing!

    Same for the "4070". If it was a $350-400 GPU keeping pace with the RTX 3080, it would've been astonishing & similar to when 970 = 780 Ti, 1070 = 980 Ti, or 3070 = 2080 Ti. Instead, it's "same performance, regression at 4K, only $100 less".
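
    (For anyone who wants to check the arithmetic: a quick sketch reproducing those percentages from CUDA core counts. The full-AD102 count is from the Ada whitepaper; the per-card counts are Nvidia's published specs as far as I know, and the variable names are mine.)

```python
# Reproduce the die-fraction percentages quoted above from CUDA core counts.
FULL_AD102 = 18_432   # complete AD102 die (Ada whitepaper)
RTX_4090   = 16_384   # shipping RTX 4090

cards = {             # published 40-series CUDA core counts
    "4060":    3_072,
    "4060 Ti": 4_352,
    "4070":    5_888,
    "4070 Ti": 7_680,
    "4080":    9_728,
    "4090":   16_384,
}

print(f"4090 vs full AD102: {RTX_4090 / FULL_AD102:.3%}")   # ~88.9%
for name, cores in cards.items():
    print(f"{name:>8} vs 4090: {cores / RTX_4090:.3%}")     # e.g. 4080 = 59.375%
```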



  • Registered Users Posts: 7,409 ✭✭✭Icyseanfitz


    Yeah, it's pure **** really. I've started a new build, but the GPU will be the last thing I change, if I even do 🤷‍♂️ Wouldn't surprise me if Nvidia is trying to kill off its gaming GPU market in order to push people to the cloud.



  • Registered Users Posts: 4,507 ✭✭✭TheChrisD


    Taking the full die as 100% is pretty disingenuous, I would argue; and even comparing just the CUDA cores doesn't reflect the actual real-world performance of the cards, especially in the 40 series, where the 4080 performs at about 75-80% of a 4090 and even my 4070 Ti is at about 60-70% of a 4090.



  • Registered Users Posts: 597 ✭✭✭Aodhan5000


    I think what they're trying to get at is:

    Fewer CUDA cores = smaller die = lower cost for Nvidia.

    That reduced cost should be passed on to the consumer. That is not happening.

    For example:

    • 4080: 379mm^2 die, $1,200 MSRP vs 3080: 628.4mm^2 die, $699 MSRP
    • 4070: 294mm^2 die, $599 MSRP vs 3070: 392mm^2 die, $499 MSRP
    • 4060: 190mm^2 die, $299 MSRP vs 3060: 276mm^2 die, $329 MSRP


    Now I think we can all understand that newer process nodes incur greater costs than older nodes, as do greater VRAM capacities and even inflation, but that doesn't come near to justifying the drastic price increase Nvidia has applied to this generation.

    Sidenote: CUDA cores may well be a more effective comparison tool than die size, since with dies such as those used for the 3080s, some CUDA cores were disabled.
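
    (Running those figures through a quick calculation makes the point starkly. This sketch uses only the die sizes and MSRPs quoted in this post, and it deliberately ignores the wafer-cost difference between Samsung 8nm and TSMC 4N.)

```python
# Rough MSRP-per-mm^2 comparison using the die sizes and launch prices quoted above.
pairs = [
    # (name, die_mm2, msrp_usd)
    ("4080", 379.0, 1200), ("3080", 628.4, 699),
    ("4070", 294.0,  599), ("3070", 392.0, 499),
    ("4060", 190.0,  299), ("3060", 276.0, 329),
]

for name, die_mm2, msrp in pairs:
    print(f"{name}: {msrp / die_mm2:.2f} $/mm^2")
# 4080 ~3.17 vs 3080 ~1.11 -> roughly 2.8x more dollars per mm^2 of silicon
```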



  • Registered Users Posts: 4,364 ✭✭✭Homelander


    People get what they pay for. If people keep buying Nvidia's cards, with their grossly inflated prices and misleadingly inflated series numbers, we can expect this to continue.

    I find it quite honestly scary the number of people trying to defend the situation, including a small minority on here.

    What next, an RTX 4030 for €249?

    It's bonkers, especially in the US, where a vastly faster RX 6700 XT is the same price as or cheaper than an RTX 4060. Yet AMD are still just chipping away at a rock face in terms of market share.



  • Registered Users Posts: 31,015 ✭✭✭✭Lumen


    The only 40-series card whose pricing makes sense is the 4090, because with upscaling tech the only people who "need" one are VR nerds playing flight sims on 2k+ headsets, or people using them for work.

    All the others are one tier too expensive and mostly short of VRAM.

    Totally unnecessary short-term avarice which will stunt the very market they're trying to sell into.



  • Registered Users Posts: 18,703 ✭✭✭✭K.O.Kiki



    ...I should probably stop saying good things about AMD.



  • Registered Users Posts: 5,556 ✭✭✭Slutmonkey57b


    I immediately distrust videos which contain frontrunner statements like "you probably didn't know", and that's certainly true here. The XTX being very power inefficient, drawing high loads at idle, and being a bit of a basket case has been very well covered at this stage. All this video adds is some more specific game benchmarks to highlight that.



  • Registered Users Posts: 18,703 ✭✭✭✭K.O.Kiki


    I didn't realise it drew maximum power at almost all workloads though, hah.



  • Moderators, Computer Games Moderators Posts: 14,689 Mod ✭✭✭✭Dcully


    Was thinking of grabbing a 4070 on Prime Day, purely just to have it bought if there are any deals.

    I'm not sure if there is any point putting it into my current 8600K rig, but my thinking is maybe Prime Day prices will be the best we can hope for on GPUs for a while?

    I don't actually need a new build right now, but I'm just thinking ahead and seeing if I could get the most expensive component now.

    Having said that, I bought a lovely new headset 2 weeks ago, an Epos H6Pro. Had a look there now and they claim it's 33% off for Prime Day, but it's exactly the same price as what I paid two weeks ago.



  • Registered Users Posts: 2,656 ✭✭✭C14N


    I haven't been following GPU news until lately, but my own GPU is hitting 9 years old this summer, so I'm long overdue an upgrade. I wanted an Nvidia card as I mostly use it for photo/video editing, and the software I use takes advantage of proprietary Nvidia features and runs better on Nvidia in general. The coverage of the latest gen of Nvidia hardware has been very negative though, and the consensus is that pretty much all of the mid-range cards (i.e. <€500) are a very bad deal.

    Are these just considered insufficient updates to the previous gen, or are they bad deals all round? If I'm coming from a really old card and want to upgrade ASAP, are they still worth looking at? A lot of reviews mention that the 8GB of VRAM most of the cards come with is a pretty severe limitation in 2023 (and for video editing, guides recommend it as a minimum for 4K), but the previous gen also largely carries just 8GB too. The exceptions in the new line are the 4070, though spending €600+ is a real stretch to get to 12GB, and the 4060 Ti, which has a 16GB version coming out in a week or so.

    At this point in time, would I be better off just trying to get a cheaper 8GB 3060/3060 Ti to tide me over for a few years and save the difference for the next upgrade down the line, or would it be worth paying the extra for something like the 4060 Ti with 16GB?


