		Introduction 
		 
At first glance the new NV17-based cards from NVIDIA, branded GeForce4 MX, are a puzzling addition to the consumer graphics arena. It's widely known that the core is slower and less fully featured than NV20 (GeForce3), which is gradually being phased out. So why would NVIDIA introduce a product that essentially replaces the outgoing NV20 yet is slower and lacks many of the features that made NV20 one of their most popular GPUs?
 
Looking at it from a technical standpoint, it doesn't make much sense. NV17 is slower than NV20, yet in marketing terms it's promoted as 'better' simply because its brand number is one higher than GeForce3. To properly understand NV17, however, you need to look at it from a money-making and marketing point of view.
 
It's my theory that NVIDIA needed decent product differentiation, and NV20 is perhaps a little too close in features and performance to the new NV25 (GeForce4 Ti) core for comfort. By contrast, NV17 is sufficiently different from NV25 in performance and features for NVIDIA to have all their bases covered. With the three speed-graded NV17 models and the three NV25s, NVIDIA have a wide range of GPU products to cover all their target market segments.
 
It's purely about getting the best spread of cost and performance onto the market, and NV17 does that better than NV20. NV20 (GeForce3) will still be around in limited numbers in some shape or form, so we shouldn't forget about it, but it's the new GeForce4-based products that are grabbing the headlines at the moment.
 
		NV17 In Detail 
		 
So before taking a look at this particular implementation of the GPU, let's take a closer look at the technology behind the GPU itself.
 
NV17 started life as the NV17M, a mobile GPU aimed at the performance laptop market. As NVIDIA's new products began to take shape, the NV17M started to look more and more appealing as a consumer chipset for regular AGP cards. So what's under the hood?
 
NVIDIA makes a lot of noise about the integrated features on this GPU, so let's take a look at them.
		 
Firstly we have dual independent 350MHz RAMDACs that can drive twin displays at high resolution, with independent resolutions, colour depths and refresh rates on each display. This technology is used in nView, which we'll cover later on.
 
The core also features dual TMDS transmitters for dual-DVI digital output, again with different resolutions on each display, entirely in the digital domain, with the final analogue conversion happening at the monitor rather than on the card.
 
Thirdly, they are touting the integrated TV display logic as the best in its class, offering output to a television at up to 1024x768 resolution.
 
Lastly, they like to make some noise about the VPE, or Video Processing Engine, which is basically the display logic that takes care of HDTV and DVD processing on-chip.
 
As far as raw processing power goes, in MX440 form the GPU pushes these numbers:
		 
		
		
- 1.1 billion texels/sec fill rate
- 34 million triangles/sec
- 6.4GB/sec memory bandwidth
 
		 
		 
In comparison, the GeForce3 Ti500 that also features in this review has 8GB/sec of memory bandwidth and can push out 3.84 billion AA samples/sec in stock configuration.
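
As a sanity check on those headline numbers, here's a rough calculation; the 270MHz core clock, two pixel pipelines with two texture units each, 400MHz effective DDR and 128-bit memory bus for the MX440 (and 500MHz effective DDR for the Ti500) are assumptions taken from the published specifications rather than anything measured here.

    # Back-of-the-envelope check of the quoted figures. Clock speeds, pipeline
    # counts and bus widths below are assumed from the published specs.

    def fill_rate_gtexels(core_mhz, pipelines, tmus_per_pipe):
        # Theoretical texel fill rate in billions of texels per second.
        return core_mhz * 1e6 * pipelines * tmus_per_pipe / 1e9

    def bandwidth_gb(effective_mem_mhz, bus_width_bits):
        # Theoretical memory bandwidth in GB per second.
        return effective_mem_mhz * 1e6 * (bus_width_bits / 8) / 1e9

    print(fill_rate_gtexels(270, 2, 2))   # MX440: ~1.08 billion texels/sec
    print(bandwidth_gb(400, 128))         # MX440: 6.4 GB/sec
    print(bandwidth_gb(500, 128))         # Ti500: 8.0 GB/sec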
 
As well as the integrated features and raw performance figures, there are three technologies present on the GPU that are worth looking at before checking out the card itself.
 
nView is basically an amalgam of the integrated features we just saw, including the dual TMDS transmitters and the VPE. It's the mechanism NVIDIA uses for its dual-display technology, and if you didn't think NVIDIA was serious before about capturing some market share from Matrox in this area, think again.
 
Like the NV25 GPUs, NV17 also features implementations of a pair of technologies that first made their serious debut on the GeForce3. The anti-aliasing engine has been reworked (more sample modes and improved performance) and rebadged Accuview. Accuview is a multi-sampling implementation of anti-aliasing, where multiple samples of the frame are combined to form the final output image.
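
As a loose illustration of what a multi-sampling resolve does, the sketch below averages each pixel's sub-samples into one output colour. It's a generic, simplified picture of the technique, not a description of the Accuview hardware itself.

    # Generic multi-sample resolve: each pixel's sub-samples are averaged to
    # produce the final colour. Purely illustrative pseudo-framebuffer code.

    def resolve_pixel(samples):
        # samples: list of (r, g, b) tuples for one pixel
        n = len(samples)
        return tuple(sum(channel) / n for channel in zip(*samples))

    # A 4x sampled edge pixel, half covered by a red triangle:
    print(resolve_pixel([(255, 0, 0), (255, 0, 0), (0, 0, 0), (0, 0, 0)]))
    # -> (127.5, 0.0, 0.0), i.e. a smoothed edge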
 
The final technology we'll look at is the Lightspeed Memory Architecture II, or LMA II. LMA II is NVIDIA's take on improving the effective memory bandwidth of cards based on their GPUs. The LMA is a crossbar-type memory controller that sits on-chip and interfaces with the high-speed DDR memory on the cards (no SDR versions this time around, unlike the GeForce2 MX). It does a number of clever things, such as optimising memory accesses by splitting them up into smaller chunks so that less bandwidth is wasted and more can be put to good use.
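
To give a feel for why splitting accesses helps, the illustration below compares the bytes wasted when a small read has to be rounded up to one wide controller's transfer size versus a narrower sub-controller. The 32-byte and 8-byte widths are made-up example numbers, not the real LMA II layout.

    import math

    # Illustration of why narrower, independent memory controllers waste less
    # bandwidth on small accesses. Controller widths here are example figures.

    def wasted_bytes(request_bytes, controller_width_bytes):
        # Bytes fetched but never used once the request is rounded up to whole
        # controller-width transfers.
        transfers = math.ceil(request_bytes / controller_width_bytes)
        return transfers * controller_width_bytes - request_bytes

    # An 8-byte texel read on a single 32-byte-wide controller drags in 24
    # unused bytes; on an 8-byte-wide sub-controller it wastes none.
    print(wasted_bytes(8, 32))   # 24
    print(wasted_bytes(8, 8))    # 0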
 
It also features techniques similar to the technology we first saw up close in ATi's original Radeon GPU. The LMA II logic on the GPU can perform extremely quick operations on the Z-buffer, such as a fast Z-buffer clear, Z-occlusion testing and Z-occlusion culling. All these techniques increase the effective memory bandwidth available to the GPU and allow for some extra speed that wouldn't be there without the crossbar.
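
Here's a rough software sketch of the idea behind Z-occlusion culling: fragments that are already hidden get rejected before any colour or texture memory traffic is spent on them. Again, this is a conceptual illustration and not the actual LMA II logic.

    # Conceptual Z-occlusion cull: reject fragments behind the current Z-buffer
    # contents before doing any texturing or colour writes.

    def draw_fragment(x, y, depth, colour, z_buffer, colour_buffer):
        if depth >= z_buffer[(x, y)]:
            return False              # occluded: skip shading and memory writes
        z_buffer[(x, y)] = depth      # visible: update depth and colour
        colour_buffer[(x, y)] = colour
        return True

    z_buffer = {(0, 0): 0.5}
    colour_buffer = {(0, 0): (0, 0, 0)}
    print(draw_fragment(0, 0, 0.8, (255, 0, 0), z_buffer, colour_buffer))  # False: culled
    print(draw_fragment(0, 0, 0.2, (0, 255, 0), z_buffer, colour_buffer))  # True: drawn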
 
While the card features those enhancements over NV20, the GPU isn't a full DX8-class part. It can't do all DX8-class rendering features in hardware, and for that deficiency in the core you lose performance, as we'll see in our benchmarks. To reiterate, the core is slower than NV20 (GeForce3) despite the brand name suggesting otherwise.
 
		The Card Itself 
		 
Now that we've covered a bit of background on the technology behind the card, let's take a look at the card itself.
 
		  
		 
As we can see, the Abit Siluro GF4 MX faithfully follows the NVIDIA reference design in layout. The card features a large silver GPU heatsink with active fan cooling built in, giving the GPU some potential for decent overclocking. The memory chips, however, aren't cooled either actively or passively.
 
Memory heatsinks add a few dollars per unit to the cost, and at this price point they don't make much financial sense when one of the main goals is to keep final costs as low as possible. While the memory chips generally run cool to the touch, when overclocking it's nice to have the peace of mind that heatsinks bring. An aftermarket modification for the cost-conscious overclocker perhaps, but sadly outside the remit of this review.
 
		The rest of the card is standard with a single D-Sub VGA connector and a TV-Out connector, something that should be standard on all video cards. 
 
		Performance 
		 
I'll take a look at card performance from a few angles: stock performance, overclocked performance, and also against a GeForce3 Ti500 (using the 22.40 drivers) to highlight the performance delta against a GPU whose brand name, at least, suggests it's the lesser product.
 
As with recent Hexus reviews, I'll combine as much information as possible onto the graphs so that you get all the information without trawling through a mountain of pictures.
 
As far as overclocking is concerned, I was able to overclock the card to a stable 300/495, up from 270/400 at default. While the card would run higher (315/515 was possible for 3DMark), the clocks used were the maximum that would run all tests safely with no chance of locking up during prolonged usage. Stable overclocks are obviously preferable to absolute on-the-limit clocks.
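
For perspective, that overclock works out to roughly an 11% core and 24% memory clock increase; assuming the 128-bit bus and 400MHz effective stock memory clock noted earlier, theoretical memory bandwidth rises from 6.4GB/sec to just under 8GB/sec.

    # Overclock gains relative to stock. The stock figures are the published
    # MX440 specs, assumed rather than measured.
    stock_core, stock_mem = 270, 400   # MHz core / MHz effective memory
    oc_core, oc_mem = 300, 495

    print((oc_core / stock_core - 1) * 100)   # ~11.1% core clock increase
    print((oc_mem / stock_mem - 1) * 100)     # ~23.8% memory clock increase
    print(oc_mem * 1e6 * 16 / 1e9)            # ~7.92 GB/sec theoretical bandwidth on a 128-bit bus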
 
		Before we dive into the graphs, a quick look at the test setup. 
		 
		
		
- Abit NV7m mATX Socket A DDR motherboard (nForce 420D chipset)
- Unlocked AMD Athlon XP1500+ processor (1.33GHz, 10 x 133)
- 2 x 256MB Samsung PC2700 DDR memory modules (CAS2)
- Abit Siluro GeForce4 MX 64MB
- Gainward Ti550 GeForce3 Ti500 64MB
- Adaptec 39160 dual-channel U160 PCI SCSI controller
- 2 x 73GB Seagate Cheetah U160 10,000rpm SCSI disks
- Plextor 12/10/32S SCSI CD-RW
- Pioneer 6x slot-load SCSI DVD
		
  
		
- Windows XP Professional Build 2600.xpclient.010817-1148
- Detonator XP 27.42 and 22.40 NVIDIA drivers
- Aquamark v2.3
- Quake3 v1.30
- 3DMark 2001 Professional Second Edition
- Serious Sam: The Second Encounter demo
 
		 
		 
		
      
First up, as usual, 3DMark 2001SE. This is a decent overall system benchmark, with certain game tests placing emphasis on different parts of your system's performance. In the graph we'll see the effect that not being able to do all the pixel and vertex shader functions in hardware has on the GeForce4 MX440 when compared to the GeForce3 Ti500.
         
          
         
We can see quite clearly that the Abit doesn't have the processing muscle to live with the Ti500 on the same host processor. The Abit, with NV17 under the hood, is unable to run the Nature test in 3DMark 2001, a test that can only be run on full DX8-class hardware (NV20 and NV25, as far as NVIDIA chips are concerned). Without the benefit of a Nature score, the Abit falls nearly 2000 points behind the stock-clocked Ti500.
         
Given that the Ti500 will have incremental leads in each game test, plus the advantage of a Nature-assisted score, we can still draw meaningful conclusions about the Abit. The card doesn't quite hit the 6000-point mark that we use as a gauge of decent graphics hardware performance, even when overclocked.
         
Onto Quake3 Arena v1.30 for the next game test. Quake3 is mainly a test of overall system throughput with graphics cards of this class, but we should still see some interesting scaling in the results. With Quake3 being an older engine, created before the advent of DirectX 8.0, it runs incredibly well on these cards.
         
Remember, every possible performance-sapping rendering feature was enabled. We used the four.dm_66 demo throughout, and the results are a three-run average.
         
          
         
At the highest resolution, 1600x1200, which you can see at the top of the graph, the GeForce4 MX440 gets beaten handily by the GeForce3. The GeForce3's higher fill rate and memory bandwidth are responsible for the lead here. At the lower resolutions, the GeForce4 MX440 holds its own. Both GPUs are overkill for the Quake3 engine at 1024x768 and offer extremely comfortable levels of performance.
         
Next up, Aquamark. Aquamark is even more heavily dependent on the DirectX 8.0 pixel shader functions than 3DMark, so we aren't expecting stellar performance from the GeForce4 MX440 in the Abit card.
         
          
         
The benchmark is run at 1024x768, and failing to break 25fps, even when overclocked, isn't a showcase for the card's performance in games that use the DX8 shader functions. The measly increase when overclocked is due to the CPU being made to do all the work that the card cannot; overclocking the CPU would give a better result here than increasing the card's clocks. We don't get anywhere near our 50fps target, sadly.
         
Finally in our look at game performance we have Serious Sam: The Second Encounter. I've used the publicly available demo version rather than the full game so that you have something comparable if you wish to run the benchmarks yourself.
         
        I use the Valley of the Jaguar demo at the three test resolutions with 
        the Quality graphics preference setting. 
         
          
         
In the high-resolution test at 1600x1200, the Abit takes a hammering from the GeForce3 Ti500, which simply has more fill rate and available memory bandwidth and pulls away with over twice the score the Abit can muster.
         
As we move down the resolutions and performance increases, we see that 1024x768 is the card's sweet spot in this benchmark, averaging close to 50fps over the three runs of the demo.
         
Dropping the graphics quality down would make an appreciable difference to performance, but it's nice to enable what rendering features you can.
         
Overclocking the card doesn't have much effect on performance, which suggests we might be CPU limited; this is more down to the CPU having to do work the GPU should arguably be doing than to the GPU sitting idle.
         
        Performance Conclusion 
         
While performance is strong in Quake3, the only test in the group of four that doesn't unduly stress even a GeForce2 card, never mind our test Abit GeForce4 MX, the performance in the other benchmarks leaves a lot to be desired. The card simply doesn't have the features to run the shader functions in hardware. With games that only run well on DX8-class accelerators, unless you turn off features or drop resolution, coming out more and more frequently, we have to say that the performance was disappointing.
         
However, given the price point and the target markets of the card, can we really expect stellar performance in everything? With the GeForce4 tag attached to these cards, I think we do have a right to be annoyed at the somewhat poor performance. They are slower than a GeForce3, so GeForce3 MX would be a more appropriate brand name.
         
Again, however, for the price you'll pay for one, the performance is quite good. These cards are cheap, and as long as you don't expect too much from them, they'll serve you well. I can't recommend them except to those on a strict, tight budget; saving for a full DX8 part is well worth it, if you ask this reviewer.
         
Lastly, before we jump to the final analysis, a quick word about anti-aliasing. We've seen that the performance of the card isn't exactly stellar, so any application of anti-aliasing would decrease performance further. While we haven't looked at AA performance explicitly, don't expect it to be of much use, sadly.
Overall Conclusion
         
These cards are definitely aimed at the budget-conscious user, much like the GeForce2 Titanium cards were not so long ago. The Abit is a decent performer considering what's under the hood, and it overclocked very well, although being our first NV17 through the door, we have no other NV17-based cards to compare it with just yet.
         
The TV-Out, however, was good quality, and I had no problems using this card to watch a couple of my Region 1 discs that don't work on my standalone player.
         
Abit have done a good job with the cooling considering the price point, the only glaring omission being memory heatsinks.
         
Overall, a good performer for the price, but nothing special and not one we can recommend outright. Saving for a GeForce3, ATi Radeon 8500 or GeForce4 Ti based card, all full DX8-class hardware remember, is what we recommend. We can only recommend the Abit if you're a gamer on a budget.
         
Lastly, and I've saved this till the end because it's more than likely just a problem with the review sample, the GPU fan was the noisiest I've ever heard. It was easily the loudest component in my system, and it made a low-frequency buzzing that annoyed me no end. Watch out for it.
         
Pros
         
        The price 
        Good active cooling on the GPU 
        High quality TV-Out 
        Good overclocking range 
         
Cons
         
        Low performance compared to cards priced just a bit higher 
        Lack of full DX8 compliance 
        Loud cooling fan  
      