Should you run CrossFire with the RX 480?

The RX 580, as we learned in the review process, isn’t all that different from its origins in the RX 480. The primary difference is in voltage and frequency afforded to the GPU proper, with other changes manifesting in maturation of the process over the past year of manufacturing. This means most optimizations are relegated to power (when idle – not under load) and frequency headroom. Gains on the new cards are not from anything fancy – just driving more power through under load.

Still, we were curious as to whether AMD’s drivers would permit cross-RX series multi-GPU. We decided to throw an MSI RX 580 Gaming X and MSI RX 480 Gaming X into a configuration to get things close, then see what’d happen.

The short of it is that this works. There is no explicit inhibitor built in to forbid users from running CrossFire with RX 400 and RX 500 series cards, as long as you’re doing 470/570 or 480/580. The GPU is the same, and frequency will just be matched to the slowest card, for the most part.

We think this will be a common use case, too. It makes sense: If you’re a current owner of an RX 480 and have been considering CrossFire (though we didn’t necessarily recommend it in previous content), the RX 580 will make the most sense for a secondary GPU. Well, primary, really – but you get the idea. The RX 400 series cards will see EOL and cease production in short order, if not already, which means that prices will stagnate and then skyrocket. That’s just what retailers do. Buying a 580, then, makes far more sense if you’re dying for a CrossFire configuration, and you can even move the 580 to the top slot for best performance in single-GPU scenarios.

And there will be a lot of single-GPU scenarios. In our history over the past few years, there has not been a single time when GamersNexus has recommended either SLI or CrossFire for wide-reaching gaming use cases. It’s phenomenal in very specific, targeted games, but the additional work and compatibility / scaling issues at large just aren’t worth the hassle. It’s worth revisiting, though; we haven’t tested multi-GPU in 2017, and this marks our first go at it.

We’re using our standardized GPU benchmark platform for these tests, which we’ve also recently deployed for GTX 1080 Ti reviews and RX 580 & RX 570 reviews.

Approach

For this test, we’re configuring the RX 580 & 480 Gaming X 8GB cards in CrossFire in our Z270 test bench platform (defined further below). The RX 580 Gaming X takes the top slot, but both cards will be limited by the slowest link – that’d be the 480 Gaming X, though not by that much. The main question was whether this would even work, and we quickly learned that the answer is “yes.” The next question was whether scaling was actually worth it, as opposed to simply purchasing a single, more powerful GPU, whether that’s 10-series or future Vega.

GPU Testing Methodology

For our benchmarks today, we’re using a fully rebuilt GPU test bench for 2017. This is our first full set of GPUs for the year, giving us an opportunity to move to an i7-7700K platform that’s clocked higher than our old GPU test bed. For all the excitement that comes with a new GPU test bench and a clean slate to work with, we also lose some information: our old GPU results are not comparable to these, owing to a completely new testing methodology, new game settings, and a new suite of games. DOOM, for instance, now has a new test methodology behind it. We’ve moved to Ultra graphics settings with 0xAA and async enabled, also dropping OpenGL entirely in favor of Vulkan + more Dx12 tests.

We’ve also automated a significant portion of our testing at this point, reducing manual workload in favor of greater focus on analytics.

Driver version 378.78 (press-ready drivers for 1080 Ti, provided by nVidia) was used for all nVidia devices. Version 17.10.1030-B8 was used for AMD (press drivers).

A separate bench is used for game performance and for thermal performance.

Thermal Test Bench

Our thermal test methodology is largely parallel to the EVGA VRM final torture test that we published late last year. We use logging software to monitor the NTCs on EVGA's ICX card, with our own calibrated thermocouples mounted to power components for non-ICX monitoring. Our thermocouples use an adhesive pad that is 1/100th of an inch thick, and does not interfere in any meaningful way with thermal transfer. The pad is a combination of polyimide and polymethylphenylsiloxane, and the thermocouple is a K-type hooked up to a logging meter. Calibration offsets are applied as necessary, with the exact same thermocouples used in the same spots for each test.

Torture testing used Kombustor's 'Furry Donut' testing, 3DMark, and a few games (to determine auto fan speeds under 'real' usage conditions, used later for noise level testing).

Our tests apply self-adhesive, 1/100th-inch thick (read: laser thin, does not cause "air gaps") K-type thermocouples directly to the rear-side of the PCB and to hotspot MOSFETs numbers 2 and 7 when counting from the bottom of the PCB. The thermocouples used are flat and are self-adhesive (from Omega), as recommended by thermal engineers in the industry -- including Bobby Kinstle of Corsair, whom we previously interviewed.

K-type thermocouples have a known margin of error of approximately 2.2C. We calibrated our thermocouples by providing them an "ice bath," then a boiling water bath. This provided the information required to understand and adjust results appropriately.
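
For reference, the math behind that two-point calibration is simple. Below is a minimal sketch, assuming hypothetical raw readings at the ice-bath and boiling points; the values here are examples, not our measured data:

```python
# Hypothetical two-point calibration for a K-type thermocouple.
# Reference points: ice bath (0 C) and boiling water (~100 C at sea level).
# The raw readings below are example values, not our measured data.

def two_point_calibration(raw_ice, raw_boil, ref_ice=0.0, ref_boil=100.0):
    """Return a function mapping raw readings to calibrated temperatures."""
    gain = (ref_boil - ref_ice) / (raw_boil - raw_ice)
    offset = ref_ice - gain * raw_ice
    return lambda raw: gain * raw + offset

calibrate = two_point_calibration(raw_ice=0.4, raw_boil=99.1)
print(round(calibrate(71.3), 2))  # corrected temperature for an example raw reading
```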

Because we have concerns pertaining to thermal conductivity and the impact of the thermocouple pad in its placement area, we selected the pads discussed above so that the test equipment does not interrupt the cooler's performance. Electrical conductivity is also a concern, as you don't want bare wire to cause an electrical short on the PCB. Fortunately, these thermocouples are not electrically conductive along the wire or placement pad, as the 30 AWG (~0.0100"⌀) wire uses a PTFE coating. The thermocouples are 914mm long and connect into our dual logging thermocouple readers, which then take second-by-second measurements of temperature. We also log ambient, and apply an ambient modifier where necessary to adjust test passes so that they are fair.
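
As an illustration of that ambient adjustment, the sketch below shifts logged temperatures to a common ambient baseline. The function name, sample values, and 21C target are assumptions for the example, not our exact logging pipeline:

```python
# Hypothetical ambient normalization: shift each logged temperature so that
# every test pass is reported against the same ambient baseline.
# Sample values and the 21 C target are assumptions for illustration.

def normalize_to_ambient(temps_c, ambients_c, target_ambient_c=21.0):
    """Adjust logged temperatures as if every pass ran at the target ambient."""
    return [t - (a - target_ambient_c) for t, a in zip(temps_c, ambients_c)]

gpu_temps = [78.2, 79.0, 79.4]   # logged component temperatures, one per second
ambients  = [22.1, 22.3, 22.2]   # logged ambient temperatures, one per second
print(normalize_to_ambient(gpu_temps, ambients))
```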

The response time of our thermocouples is 0.15s, with an accompanying resolution of 0.1C. The laminates are fiberglass-reinforced polymer layers, with junction insulation comprised of polyimide and fiberglass. The thermocouples are rated for just under 200C, which is enough for any VRM testing (and if we go over that, something will probably blow, anyway).

To avoid EMI, placement of the thermocouples is mostly guess-and-check; EMI can be induced by the PCB's power planes and inductors. We were able to avoid electromagnetic interference by routing the thermocouple wiring to the right, toward the less populated half of the board, and then down. The cables exit the board near the PCI-e slot and avoid crossing inductors. This resulted in no observable/measurable EMI with regard to temperature readings.

We decided to deploy AIDA64 and GPU-Z to measure direct temperatures of the GPU and the CPU (this becomes relevant during torture testing, when we dump the CPU radiator's heat straight into the VRM fan). In addition to this, fan speeds, VID, vCore, and other aspects of power management were logged. We then use EVGA's custom Precision build to log the thermistor readings second by second, matched against and validated with our own thermocouples.
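
For a sense of what that cross-validation looks like, the sketch below compares two second-by-second logs and reports the average and worst-case disagreement. The file names and CSV column are hypothetical; our actual logs come out of Precision and our thermocouple readers:

```python
# Hypothetical cross-validation of two second-by-second logs: EVGA ICX
# thermistor readings vs. our own thermocouple on the same component.
# File names and the CSV column name are assumptions for illustration.
import csv

def load_log(path):
    """Read a per-second temperature column from a CSV log."""
    with open(path, newline="") as f:
        return [float(row["temp_c"]) for row in csv.DictReader(f)]

icx = load_log("icx_thermistor.csv")
tc  = load_log("thermocouple.csv")

deltas = [abs(a - b) for a, b in zip(icx, tc)]
print(f"mean delta: {sum(deltas) / len(deltas):.2f} C | max delta: {max(deltas):.2f} C")
```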

The primary test platform is detailed below:

Note also that we swap test benches for the GPU thermal testing, using instead our "red" bench with three case fans -- only one is connected (directed at CPU area) -- and an elevated standoff for the 120mm fat radiator cooler from Asetek (for the CPU) with Gentle Typhoon fan at max RPM. This is elevated out of airflow pathways for the GPU, and is irrelevant to testing -- but we're detailing it for our own notes in the future.

Game Bench 

BIOS settings include C-states completely disabled with the CPU locked to 4.5GHz at 1.32 vCore. Memory is at XMP1.

We communicated with both AMD and nVidia about the new titles on the bench, and gave each company the opportunity to ‘vote’ for a title they’d like to see us add. We figure this will help even out some of the game biases that exist. AMD doesn’t make a big showing today, but will soon. We are testing:

  • Ghost Recon: Wildlands (built-in bench, Very High; recommended by nVidia)
  • Sniper Elite 4 (High, Async, Dx12; recommended by AMD)
  • For Honor (Extreme, manual bench as built-in is unrealistically abusive)
  • Ashes of the Singularity (GPU-focused, High, Dx12)
  • DOOM (Vulkan, Ultra, 0xAA, Async)

Synthetics:

  • 3DMark FireStrike
  • 3DMark FireStrike Extreme
  • 3DMark FireStrike Ultra
  • 3DMark TimeSpy

For measurement tools, we’re using PresentMon for Dx12/Vulkan titles and FRAPS for Dx11 titles. OnPresent is the preferred output for us, which is then fed through our own script to calculate 1% low and 0.1% low metrics (defined here).
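
For readers curious how those metrics fall out of the raw data, here’s a minimal sketch of deriving 1% low and 0.1% low figures from frame times. This is an illustrative approximation, not our exact production script:

```python
# A minimal sketch of deriving 1% low and 0.1% low FPS from frame times (ms),
# in the spirit of our PresentMon/FRAPS post-processing. This is an
# illustrative approximation, not our exact production script.

def low_metrics(frametimes_ms):
    """Return average FPS, 1% low FPS, and 0.1% low FPS from frame times."""
    def avg_fps(times_ms):
        return 1000.0 * len(times_ms) / sum(times_ms)
    worst_first = sorted(frametimes_ms, reverse=True)  # slowest frames first
    n = len(worst_first)
    one_pct_low   = avg_fps(worst_first[:max(1, n // 100)])
    point_one_low = avg_fps(worst_first[:max(1, n // 1000)])
    return avg_fps(frametimes_ms), one_pct_low, point_one_low

avg, low1, low01 = low_metrics([16.7, 16.9, 18.2, 33.4, 16.5] * 200)
print(f"AVG {avg:.1f} fps | 1% low {low1:.1f} fps | 0.1% low {low01:.1f} fps")
```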

Power testing is taken at the wall. One case fan is connected, both SSDs, and the system is otherwise left in the "Game Bench" configuration.