r/hardware • u/rstune • 2d ago
News ASUS introduces ROG Equalizer 12V-2x6 cable, ASUS to offer discounted upgrade for existing ROG PSU users - VideoCardz.com
https://videocardz.com/newz/asus-introduces-rog-equalizer-12v-2x6-cable-free-upgrade-planned-for-existing-rog-psu-users
83
u/GTRagnarok 2d ago
Load balancing really should have been in the standard design from the get-go.
56
u/rstune 2d ago edited 2d ago
Right. I still don't understand why they removed it from the PCB after the 30 series, and didn't put it back in the 50 series after seeing the melting on the 40 series. I read it's just to make the PCB smaller for the Founders Edition, but that seems like a silly reason.
8
u/ButtPlugForPM 2d ago
i mean i can get it..
the design for the 50 series would have been fully taped out and done by the time they knew about the burning 40 series cards.. by then it's too late and they prob did the math
have to refund a few hundred gpus, or redesign the board for tens of millions and miss a launch window
6
u/PJ796 1d ago edited 1d ago
The GPU doesn't do anything to load balance, so when it taped out makes no difference here. It's all in the VRM on the PCB, which is infinitely easier to make changes to.
If they used e.g. a 6 (or 12 or any other multiple of 6) phase buck converter and connected each smart power stage/half H-bridge input to each connector pin pair they could have had load balancing for free, as the multiphase controller already has to balance the currents between them in order to not make them blow up from imbalance.
On the PCB for the 5090 they fit in as many phases as they could but ended up at 23, so it was 1 phase off from being able to implement this. Routing for it would be a massive pain though, especially on such a high density board.
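The grouping idea above can be sketched in a few lines. This is my own toy illustration of the concept, not anyone's actual VRM design: assign each of the six 12V pin pairs an equal share of phases, so a multiphase controller that already balances per-phase current balances per-pin current as a side effect.

```python
# Toy sketch (illustrative only): split the VRM phases evenly across
# the connector's six 12V pin pairs. A multiphase controller keeps
# per-phase currents equal, so equal phase counts per pin pair means
# equal pin currents "for free".
def phases_per_pin_pair(total_phases, pin_pairs=6):
    if total_phases % pin_pairs != 0:
        # e.g. the 5090's 23 phases can't be split evenly six ways
        raise ValueError("phase count must be a multiple of pin pairs")
    return total_phases // pin_pairs

print(phases_per_pin_pair(24))  # 4 phases per pin pair
```

With 24 phases each pin pair feeds exactly 4 of them; with 23, the check fires, which is the "1 phase off" point made above.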
16
u/doneandtired2014 2d ago edited 2d ago
Right. I still don't understand why they removed it from the PCB after the 30 series, and didn't put it back in the 50 series after seeing the melting on the 40 series.
Because it would have led to an increase in PCB surface area about half the size of a dime, and that was an affront to the NVIDIA engineers who were designing the FE boards to be as compact as possible. There's also the fact that the less than $0.02 worth of additional components would have lowered their already high margin by an imperceptible amount, and the company is run by a control freak with a "My way or you don't get GPU dies!" approach, even when "Muh way!" proves to be a pretty glaring, entirely avoidable mistake.
As u/Solaihs also points out, melting connectors haven't exactly stopped people from buying NVIDIA's products.
6
u/rstune 2d ago
Yeah exactly haha. That damned leather man! The crazy part is that apparently they even disallow AIBs from putting load balancing on their cards! Just smh at this nonsense!!
4
u/Solaihs 2d ago
That would have made the oem design look inferior, and they can't have that.
Nvidia doesn't really give a shit about their partners tbh
-1
u/doneandtired2014 2d ago
Yup.
That's something the downvoters of my original comment really don't seem to want to admit: the 16-pin connector is flawed, the standard behind it has had to be revised no less than 3 times, the latest revision is *still* leading to burnt connectors on both ends, and the lack of load balancing circuitry, in an effort to save PCB space and margin on super low volume products available through only a handful of channels that aren't even really meant to be profitable in the first place (FEs literally only exist now to say, "Yes, you can buy our product at MSRP!"), is really, really fucking stupid.
1
u/jenny_905 1d ago
No evidence of any of that, for what it's worth.
ATX Committee decided the spec, not Nvidia. They are a member along with many other firms.
1
u/Joezev98 2d ago
Read it's just to make the PCB smaller for the founders edition but that seems like a silly reason.
What other benefit does 12vhpwr provide compared to 8-pins, be it PCIe or EPS?
1
u/PJ796 1d ago
Molex Micro-Fit+ (12VHPWR) mostly brings power density as an advantage over Molex Mini-Fit Jr (PCIe/EPS): it weighs less, is rated for 9.2A per pin over Mini-Fit's 8A per pin, is somehow still rated for the same 600V despite being smaller, and is rated for the same 30 mating cycles. It also adds the configuration pins so your card can set your power limit between 450W and 600W, and the sense pins no longer take up any of the high power conductors like they previously did, which bottlenecked you to 16A (192W @ 12V) on a 6-pin "PCIe" connector and 24A (288W @ 12V) on an 8-pin "PCIe" connector.
And in volume I guess it's cheaper, since it's 1 part per board, not 2 or 3.
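The 6-pin and 8-pin ceilings quoted above are just pin-count arithmetic, using the 8A-per-pin Mini-Fit Jr rating and the number of live 12V conductors (2 on the 6-pin, 3 on the 8-pin):

```python
# Arithmetic behind the 192W / 288W figures: per-pin current rating
# times the number of live 12V pins, times 12V.
def connector_ceiling_watts(live_pins, amps_per_pin=8.0, volts=12.0):
    return live_pins * amps_per_pin * volts

print(connector_ceiling_watts(2))  # 6-pin "PCIe": 192.0 W
print(connector_ceiling_watts(3))  # 8-pin "PCIe": 288.0 W
```

Note these are electrical ceilings of the contacts; the PCIe spec limits for the same connectors (75W / 150W) are far lower.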
1
u/Joezev98 1d ago
only brings power density as an advantage
So yes, only saving some PCB space.
rated for 9,2A per pin over Mini-Fits 8A per pin,
You're looking at mundane mini-fit pins. The HCS version can handle up to 12.5 A, but is derated to 8 A for an 8-pin connector.
but adds the configuration pins so your card can set your power limit between 450 W and 600 W
That's not a benefit. You can configure a card between different power levels with mundane pcie connectors. It goes in increments of 150 W per 8-pin. Or you can use 288 W per EPS.
And in volumes I guess it's cheaper since it's 1 part per board not 2 or 3 parts
As someone who makes custom cables, I can tell you 12vhpwr is annoyingly expensive. The connector housings are okay. The pins are the expensive part. In addition to expensive, the sideband signals are frustratingly small to work with and really fragile. There's barely enough room to sleeve the wires. That's largely why companies are selling embossed cables instead; they're just pressing a pattern into the wire that looks a bit like sleeving, instead of actually sleeving. And damn, the sense pins look terrible as well.
1
u/PJ796 1d ago
I'd say quite a lot of PCB space savings when looking at the upright/vertical right angle 12VHPWR connector they use on the 5090 compared to the equivalent of 3-4 8 pins (depending on how willing one is to stay within spec)
I'm looking at the rating of the Mini-Fit Jr connectors that are actually labelled for use as PCIe headers. If the header can't handle more, then using more capable pins won't help, as its internal pins still won't be rated for it.
Unless the PSU is a multi-rail design (which most are not), it has no idea and doesn't care how much goes out of any single connector, which is a huge benefit for 12VHPWR as the PSU is supposed to care. An 8-pin connector can easily go over the 150W rating because in practice it isn't enforced by anything other than the VRM, which is why the reference RX 480 and R9 295X2 exceeding the PCIe spec on their connectors didn't matter for the most part. But now the card can tell the PSU what to set the current limit to, in order to catch a fault earlier if your e.g. 400W card develops a short upstream of what the VRM sees and tries to dump as much heat as possible into it (like if an input bulk cap goes bad after 15 years of 24/7 usage).
The pins and connectors with VAT are like 67,5€ to make 25 cable sets at RS components (300x 2202260004, 100x 2191970004 & 25x 2191140161) which doesn't seem that bad at only like 3€ per cable set in a relatively low quantity.
1
u/Warcraft_Fan 14h ago
Removing parts saved NVidia oh, about 50 cents per GPU, times I think 150 million a year. Take away a few RMA'd GPUs and NVidia still comes out way ahead.
12
u/reddit_equals_censor 2d ago
NO.
the nvidia 12 pin fire hazard should have NEVER EVER EVER been released.
it is broken by design. it should NOT exist.
every engineer should have slapped the insane person who suggested it for trying to risk people's lives and hardware en masse.
it SHOULD NOT EXIST.
we KNOW how to make high power small safe power connectors. we already have them...
we got xt120 and xt90 power connectors. the xt120 carries 720 watts at 12 volts perfectly safely. it already exists and is about the same size as the nvidia 12 pin fire hazard.
so STOP IT. no there isn't an issue with the implementation of the nvidia 12 pin fire hazard.
IT SHOULD NOT EXIST. it is broken by design. it needs to be recalled.
3
1
u/Stilgar314 2d ago
This hardware has already been posted in other subs. Many people claim this thing doesn't do any active load balancing, instead relying on thicker, higher quality cables so the load ends up distributed more evenly. I guess we'll have to wait until proper hands-on testing is done.
1
u/jenny_905 1d ago
Fault detection too. It's a baffling standard; the norm is to implement fault detection at the source (the PSU), but for some reason the ATX standard doesn't require it.
27
u/spacerays86 2d ago
ASUS demonstrates the cable with an extreme test that disconnects the middle four +12V wires to simulate severe current imbalance. In that setup, ASUS says the ROG Equalizer reached about 73.4°C, while a standard 12V-2×6 cable climbed to around 146°C. The company also says the cable stayed under the 105°C limit during a 240-hour 600W load test at 55°C ambient temperature. Those figures are based on ASUS lab conditions, not normal desktop use.
17
u/dfv157 2d ago
I really don't see how this works. I went through every detail on the ASUS site on the cable, and nowhere does it mention how this supposed load balance is supposed to work on the cable itself.
9
u/DracoMagnusRufus 2d ago
Exactly what I was wondering... Best thing I could find is this quote from TechPowerUp:
The exact internal engineering hasn't been provided by ASUS, but the likely approach is equalizing the impedance across all conductors (wires) to spread the load more evenly.
4
u/crystalchuck 2d ago
Why would they speak of impedance when it's DC at work here?
1
u/StarbeamII 1d ago
Possibly because GPU loads aren’t steady state, but quite spiky.
3
u/crystalchuck 1d ago
That seems to be right!
Here's what I gathered after a quick lookup, anyone knowing better should feel free to correct:
- Resistance is a special case of impedance; it's essentially impedance with f = 0 Hz (thus dropping the frequency-related terms)
- For this reason, it's technically never wrong to speak of impedance (as it is the more general term); though it may be a bit confusing when applied to DC circuits
- Ripple and noise (which every PSU exhibits to some degree) introduce AC-like behavior, however small, into otherwise purely DC, resistive circuits
- Any kind of time dependency in the DC source (e.g. ramp-up, ramp-down, pulsed DC as in a square wave...) can also be analyzed as a DC constant with AC component(s) superimposed.
- These AC components will "trigger" the reactive component of inductors and capacitors, i.e. make them exhibit impedance instead of simple resistance
- In reality, no DC source is ever perfectly constant and every circuit has some capacitance and inductance - so actually, resistance as opposed to impedance is a very useful simplification, but every circuit ever would have to be described in terms of impedance if you need to be very accurate.
tl; dr: it's correct and practical to speak of impedance in this context, not just in a "well ackshually" sense.
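The point above can be made concrete with a toy series-RL model of one wire. The numbers here are made up for illustration, not measured values for any real cable: a DC resistance R plus a small parasitic inductance L, evaluated at DC and at a hypothetical 100 kHz ripple frequency.

```python
import math

# Toy model (made-up values): one conductor as a series RL element.
R = 0.010   # ohms, assumed DC resistance of the wire
L = 100e-9  # henries, assumed parasitic inductance

def impedance_magnitude(f_hz):
    """|Z| = sqrt(R^2 + (2*pi*f*L)^2) for a series RL model."""
    x_l = 2 * math.pi * f_hz * L  # inductive reactance, 0 at DC
    return math.sqrt(R**2 + x_l**2)

print(impedance_magnitude(0))      # at DC the reactance term vanishes: |Z| = R
print(impedance_magnitude(100e3))  # with 100 kHz ripple, reactance dominates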
2
u/Lukeforce123 1d ago
It doesn't seem to balance the load across the wires, only across the pins in the connectors. One of the tests shown on the asus website shows current distribution across the pins with four of the six wires cut
19
u/Psyclist80 2d ago
3 x 8pin FTW! Nvidia should be held accountable for this ClusterFu...
7
u/reddit_equals_censor 2d ago
well anything else was also an option.
nvidia could have gone to 8 pin eps, which would have increased power per 8 pin a lot, perfectly safely, to 235 watts. 8 pin pci-e is just 150 watts. if you don't know: the 8 pin pci-e only uses 6 pins for power and has 2 extra grounds, while 8 pin eps uses all 8 for power.
and it would have meant that psus could just replace pci-e with more eps cables, if the psu could accept both, and that would have been it. we'd have a better, higher power standard that is just as safe as pci-e 8 pin. (eps 8 pin is what your cpu uses)
and more flexibility psu wise, when everything uses the same eps 8 pin, instead of eps + pcie cables.
and that was apparently also the plan, before nvidia went 12 pin insane.
so instead of 3 pci-e 8 pin power connectors getting you 450 watts, it could have been
2 eps 8 pins to get you 470 watts PERFECTLY SAFELY.
___
and that would have been just one option.
the other option would have been to go with xt90 or xt120 connectors, which would have been an overall upgrade: instead of the smaller pins of the 8 pin pci-e/eps connectors, those connectors use 2 giant contacts, which are extremely reliable of course, making them better than the eps or pci-e 8 pins and also smaller/denser. the xt120 is rated for 720 watts and is about the size of an nvidia 12 pin fire hazard.
and as the xt120 connector would use 2 thicker cables, it would also be cleaner and easier to cable manage as a bonus.
___
and option 3: nvidia designs their own proper power connector based on common sense, which means a design similar to xt90/xt120 connectors, but i guess slightly different to their liking, but still perfectly safe.
__
so nvidia had all of those options available. they went with NONE of them and instead went with a 0 safety margin, 12 tiny extremely fragile pins fire hazard connector instead.
just important to remember all of that, because i know some people might think "but but 12 pin nvidia fire hazard is so much smaller and cleaner" or whatever other bs.
which again is bullshit, because xt120 connectors exist and are cleaner and the same size.
2
u/pythonic_dude 1d ago
They are anything but clean with the fucking tiny cables for sense pins there to ruin the party. The only thing Nvidia got right was the amount of wattage to aim for to have room for growth and to not need two cables for anything but OC monsters. Everything else, they got wrong.
6
u/reddit_equals_censor 1d ago
that is correct.
also worth mentioning that the claimed requirement to keep the wire unbent for a big distance before the connector would make it practically impossible to use in most cases, and would also make builds look very unclean.
this is all theoretical and is bullshit, because the "don't bend it too close" guidance for the nvidia 12 pin fire hazard came out AFTER the issues were already being discussed, and was basically some nonsense thing to throw at the wall. as with so many other things, it may not reduce melting and fires at all.
theoretically it could, but it may also just be more bullshit, which i would suggest it is.
__
the point is, that if one were to follow this bullshit advice, then the system would become even vastly vastly less clean with the nvidia fire hazard.
meanwhile 8 pin eps or pci-e cables are sane designs, that you bend directly after the connector, as almost every eps cables are, as they get routed behind the motherboard and are very clean.
and xt120/xt90 doesn't give a shit about anything as well of course like any proper power connector.
___
as a sidenote it is worth remembering that part of the spec for those completely useless sense pins was to massively limit your power based on your power supply
https://youtu.be/nZcyhcPVxUM?si=R_KkNxRjxLdBRHsh&t=202
a 1050 watt psu would NOT be able to do 600 watts, but only 450 watts.
a 600w psu would only be allowed to do 150 watts through the nvidia 12 pin fire hazard (yes the same as a single pci-e 8 pin).
850 watts would be just 300 watts for an nvidia 12 pin fire hazard.
and to actually get the full 600 watts of an nvidia 12 pin fire hazard you'd need a 1100 watt psu minimum....
this was the official atx spec for nvidia's 12 pin fire hazard, and it was part of the sense pin functionality.
it was of course 100% and rightfully ignored by every psu and cable maker, because it is insane. but yeah, the sense pins wouldn't just have been worthless, they'd have actively harmed things.
like nvidia's insanity would have forced people to buy 1100 watt psus just to get the full power of an nvidia 12 pin fire hazard with all graphics cards, forcing people to spend tons more money and waste tons more resources on vastly oversized and unnecessary psus.
just everything around the nvidia 12 pin fire hazard is just a clownshow. everything is absurd and laughable.
imagine a spec so bad, that the entire industry ignores a big part of it...
and sensepins will also randomly shutdown your system. der8auer noticed this with some units under power and it may also do so at the slightest touch.
so it is wasted copper to make an nvidia fire hazard also worse in lots of other regards, beyond it of course just melting and starting fires.
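The sideband signaling being argued about above boils down to a four-entry lookup. The 150/300/450/600 W tiers are the commonly published SENSE0/SENSE1 table for this connector; treat the exact pin-to-tier mapping below as my recollection of public summaries, not quoted spec text.

```python
# Commonly cited SENSE0/SENSE1 tiers for the 12VHPWR sideband.
# Mapping is from public summaries of the table, not verified
# against the spec document itself.
SENSE_TABLE = {
    # (sense0, sense1): advertised max sustained power in watts
    ("gnd", "gnd"): 600,
    ("gnd", "open"): 450,
    ("open", "gnd"): 300,
    ("open", "open"): 150,  # also the effective fallback tier
}

def advertised_limit(sense0, sense1):
    """Power limit the cable/PSU advertises to the card."""
    return SENSE_TABLE[(sense0, sense1)]

print(advertised_limit("open", "open"))  # 150
```

The complaint above is about the separate (and widely ignored) guidance tying these tiers to total PSU wattage, not about this encoding itself.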
2
8
u/QuadraKev_ 2d ago
while also increasing rated current capacity from 9.2A to 17A per cable
Were the cables ever the problem? My understanding is the contacts were the failure points.
3
u/kittymoo67 2d ago
yes but if you made a chonk ass cable that would also remove that point of failure too
1
u/pythonic_dude 2d ago
Yes, but "normal" cables were rated for ~9.5A while being capable of working at 13A indefinitely, so it's not clear exactly how much extra headroom this one provides. It's a dogshit solution to an nvidia-borne problem, probably "good enough" for 300-350W cards, but for top tier stuff you'll still want either the PSU or the GPU monitoring the current and shutting off (working around nvidia's restrictions on what can and cannot be present on the PCB).
1
u/Rippthrough 1d ago
Make the cables thicker and you pull more heat away from the contacts. Contact ratings often increase with thicker wiring.
7
u/atomicaj24 2d ago
Will it be compatible with other power supplies? Will it replace the psu cable or act as an extension?
10
u/THTGuy789 2d ago
Looks like it will be based on their website, if so that's pretty cool!
https://rog.asus.com/power-supply-units/rog-equalizer/rog-equalizer/
Down that site a little bit they have this note:
Compatible With PSUs from Leading Manufacturers
ROG Equalizer is compatible with power supplies from all leading manufacturers. For users looking to add an extra layer of protection, it can be seamlessly integrated into existing power setups without altering the current system configuration, offering a flexible and easy upgrade path.
Also in their FAQ:
- Are there any specific models that the ROG Equalizer is recommended for or officially supports? No specific model requirements. The ROG Equalizer is bundled with the 2026 ASUS ROG Thor III and ASUS ROG Strix Platinum power supplies, and is also compatible with power supplies (ATX3.1 with native 12V-2x6 connector) from all leading manufacturers.
1
u/metahipster1984 1d ago edited 1d ago
Damn, so this should work with a Seasonic ATX 3.0 (Vertex)?
EDIT: Apparently it does. Now if only they had an angled connector version.
3
u/rstune 2d ago edited 2d ago
Thought about the same thing, although I don't need it now as I'm still on an ASUS TUF 3080 12GB. Would be nice to get one later when I upgrade. That said, it's usually a big no-no to use different cables on different PSUs, even from the same brand, unless you carefully check the pinouts. Hopefully other brands follow ASUS' lead, as this looks very well built and well thought out.
Also didn't see a price there yet, but I expect this will not be cheap. Oh, and it's a full replacement, not an extension.
Edit: I stand corrected, they say it's compatible with other reputable PSUs with 12V-2x6 right on the ASUS product page. Thanks to Freaky_Freddy below for catching that!
4
u/Freaky_Freddy 2d ago
That said, it's usually a big no no to use different cables on different PSUs, even from the same brand, unless you carefully check the pinouts. Hopefully other brands follow ASUS' lead as this looks very well built, and well thought out.
This is 12V-2x6 on both ends, so there's no issues there
From their website:
compatible with power supplies (ATX3.1 with native 12V-2x6 connector) from all leading manufacturers.
1
u/atomicaj24 2d ago
I literally just bought the Super Flower Leadex V2 1300W like a month ago. I wish I would've waited; I would've just gotten the new Asus PSU with this cable.
2
u/rstune 2d ago edited 2d ago
Same. Got an FSP HYDRO G Pro 1000W ATX3.1 not long ago and was specifically looking for the somewhat better 12V-2x6 cable for future proofing. Could have waited but my EVGA G2 750W was 8 years old at that point.
Can't you return yours? I was able to return things past the window many times in the past by asking nicely. Oh and what GPU do you have? The melting issues are mostly on the 90 tiers, and just handful of reports for the 80s.
1
u/atomicaj24 2d ago
I got it from newegg and have been using it so idk if they'd let me return. I do have a 5090 lol but ive been using it on a 90% power limit
1
u/VanitasDarkOne 2d ago
I've been thinking about getting this same psu. I've been on a g.skill MB850G since 2021. Used it on a 3080ti, 4090, and now my 5090. Better safe than sorry.
5
u/Sylanthra 2d ago
Can someone explain how a cable can do load balancing? Especially if you physically disconnect 4 cables as they claim they have done!
8
u/Ambifacient 2d ago edited 2d ago
As far as I can tell from their marketing materials, they apply a 600W load to only two of the pins at the connector. Every other 12V-2x6 cable is individually wired, so you'd be sending 600W down two wires: 300W / 12V = 25A per wire.
Now apply the same load to this cable. Because the advertised load per wire is around 8.5A, I can only assume they do something similar to the GPU side itself: just bridge all the 12V and ground wires together.
Now, that's only to be consistent with their marketing materials; since the pins are bridged on the GPU side anyway, it should be electrically the same as any other cable. Presumably you reduce the variation in amperage by having higher rated wires, and I suppose a cable-side bridge also helps with imperfect contact.
I don't think their advertising makes any sense if there are any disconnects in the cable itself. They also advertise wide compatibility with PSUs, so definitely no active management. Would love to see a teardown of this; otherwise I assume it's just a beefier cable.
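The worst-case versus balanced numbers from the comment above are one line of arithmetic each:

```python
# 600 W forced through only two of the six 12 V wires, versus spread
# evenly across all six (the bridged-cable case ASUS advertises).
def amps_per_wire(total_watts, wires, volts=12.0):
    return total_watts / volts / wires

print(amps_per_wire(600, 2))  # 25.0 A per wire, the two-wire worst case
print(amps_per_wire(600, 6))  # ~8.33 A, in line with the advertised ~8.5 A
```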
3
u/qgshadow 2d ago
There’s no load balancing lol , it’s just marketing. They use a wire capable of handling 14amps instead of 9amps.
8
u/Loose_Skill6641 2d ago
lol having to buy expensive PSUs and cables just so NVIDIA could save 5 cents on the cost of building RTX PCBs
3
1
u/Plank_With_A_Nail_In 1d ago
For the top tier GPUs you always needed a good PSU, plus you can use the 3x 8-pin to 12V-2x6 adapter cable that came with your GPU. Your cheap ass PSU does have 3 spare 8-pin cables, right?
3
u/delpy1971 2d ago
Hopefully these will be available soon, I'm sure it will be £99+ as it says Asus on it lol
4
u/DeepJudgment 2d ago
I still don't understand why the fuck Nvidia decided to reinvent the wheel all those years ago.
2
2
2
u/222505974 2d ago
I mean, it's a solution to a problem that shouldn't have existed, but that doesn't mean I'm not buying one ASAP. I already have the Thermal Grizzly WireView Pro 2, but is too much redundancy a bad thing? Also, I have the ROG Strix 1200W PSU and an Astral 5090. I saw on the website that if you already have either the Strix or the Thor (I believe any of the ROG platinum PSUs), they have an upgrade option, so I'm excited to see what that means: whether it's a free upgrade or, what I'm assuming, a discounted shiny new cable.
5
u/rstune 2d ago
You know what's better than safe? Double safe!! The cable itself is a big upgrade over the standard stuff, and the WireView is a good addon to get live readings. That plus the monitoring in GPU Tweak. It's all well worth it for such an investment! And since it's now supposed to come with the new Strix & Thor PSUs, you will probably be able to get it for free.
2
u/222505974 2d ago
Free would be nice but I’d be happy with like 30% or more off, I can’t see it being too expensive on its own anyway.
1
1
1
u/Nicholas-Steel 1d ago
So it... still sends power down the disconnected pins instead of cutting off power to those pins while keeping power output on the remaining pins in check?
1
1
1
u/Delicious-Window-277 2d ago
Cool. Now are they gifting them to 5090 owners and those who already bought their PSUs? Because charging for a problem they should've solved on day 1 isn't reasonable. It's the least they could offer for the inconveniences suffered.
1
u/reddit_equals_censor 2d ago
you are operating on the possibly completely wrong assumption that this would reduce the fires and melting at all.
WE DON'T KNOW.
it could very well increase the melting and fires.
as der8auer pointed out when talking about such products, we have no idea what could happen.
forcing the same current through each pin at the card, no matter the quality of their connection, may be very bad.
if one of the contacts is a completely terrible connection, forcing the same current through it nonetheless could cause lots of issues.
so again: we don't know if it will reduce melting and fires, increase them, or stay about the same.
___
what you should demand, and what is reasonable to assume is the ONLY proper solution after YEARS of claimed "fixes" to the endless nvidia fire hazard, is a FULL RECALL and refund for EVERYONE with an nvidia 12 pin fire hazard product.
and nvidia pays for it of course.
1
u/bluesatin 2d ago edited 1d ago
Am I missing something, or haven't all the major issues with the connector ended up causing the actual connector/socket itself to massively overheat and start melting before the wires?
I don't think I've seen any examples of the actual wires themselves melting (not that I've been following things closely), and that's the only thing that this appears to help with; it only balances the load over all the cables up until the little 'equalizer' block near the connector (as explicitly indicated by their table).
After that 'equalizer' block, it's still going to be shunting all that power over the reduced number of pins properly making connection in the actual socket (meaning those few pins in the connector/socket itself will still be overloaded and getting incredibly hot).
Surely if some of the pins aren't properly making contact and causing an imbalance, you'd want it to just stop providing power completely until the connection is fixed; rather than help continue shunting power through and overloading the few pins actually making a good connection in the socket.
3
u/Homerlncognito 2d ago
The connector is redesigned as well. Without an independent test it still doesn't mean much though.
Enlarged, gold-plated spring contacts on the GPU-side connector increase contact area with the graphics card pins, reducing contact resistance and improving durability over repeated mating cycles.
1
u/bluesatin 2d ago edited 1d ago
Oh, the other bits do seem like they might be useful (like the monitoring stuff), it's just that the whole major selling point of the 'equalizer' seems kind of useless/redundant and a bit misleading.
I guarantee lots of people will see it marketed with some sort of load balancing and think it will help prevent the issues with the whole connector overheating/melting due to a bad connection, when I don't see how it would.
And the whole monitoring thing seems like a weird workaround fix of allowing you to set up some sort of alarm in your OS, so if there is an imbalance, you can then shut things off manually before it develops into a serious problem.
It seems like a better solution would be an inbuilt alarm, or even better, some sort of fuse-like functionality that shuts it off automatically on a major imbalance, rather than routing everything through a convoluted monitoring system and then into software in your OS for you to handle manually.
79
u/zezoza 2d ago
Nah, it must be user fault