Comparing FireGL T2 128 MB with Radeon 9600 64 MB
-
fox_napier
- Freshman Member
- Posts: 119
- Joined: Tue Apr 27, 2004 9:22 am
- Location: Durham, NC
Comparing FireGL T2 128 MB with Radeon 9600 64 MB
Hello all,
The video chip seems to be a pivotal point of preference in purchasing (say that 5 times fast!) a T42. This post is devoted to clarifying the differences between these two chipsets.
The basic differences between the FireGL T2 128 MB and the Radeon 9600 64 MB are:
* the additional 64 MB on the former chipset (obviously)
* different driver sets (which result in the T2's memory being clocked slightly slower than the Radeon's, and in additional registers being used for OpenGL rendering)
That's it. The core is the same. The circuitry is largely the same. The cooling system for these chips is a bit ambiguous, since T41s and T40s used a noisier fan, while T42 owners are reporting the better fan found in the T41p.
BUT what does all this mean in the REAL WORLD?
1. what does that extra 64 MB benefit you?
2. what does that extra 64 MB disadvantage you?
3. can you use the drivers for one to get the benefits for the other?
My hunches/responses based on experience (widely open to improvement) have been:
for 1. if you plan on keeping the laptop for 3+ years or if you explicitly load large textures for specific work/games, get the most memory possible, i.e. 128 MB T2. (e.g. I've had to load very large texture maps for some rendering work on a three year old laptop with a 16 MB ATI laptop video chip, which was king of the hill in its day but choked a few months ago on this task.)
for 2. if heat and battery life are a concern, go with the 64 MB Radeon. It has the same core and processing performance as the T2, but less volatile memory means less power draw. Note that even though the ATI chipsets have PowerPlay (a sort of SpeedStep for their chipsets), you may not want to feed all 128 MB of memory while on battery power. More memory = more power draw = more heat. I wonder how big a hit this is on a T42/T42p series?
for 3. most likely you can use the Radeon drivers on the Fire GL (the driver will probably not act on the OpenGL registers) but I am doubtful that the reverse is true. ATI is probably smart enough to disable OpenGL registers on the Radeon; else many would buy the non-OpenGL card and mod it to use hardware OpenGL driver support. Anyone have any experience with this little experiment?
Re: Comparing FireGL T2 128 MB with Radeon 9600 64 MB
fox_napier wrote: 1. what does that extra 64 MB benefit you?
i use autocad, solidworks, 3dsmax, illustrator, and photoshop, so the FireGL/128 is a no-brainer. the extra vram speeds up on-screen rendering and allows higher-resolution previews.
fox_napier wrote: 2. what does that extra 64 MB disadvantage you?
i don't know how extra vram could be a detriment. if i could upgrade to 256 or 512mb, i would.
fox_napier wrote: 3. can you use the drivers for one to get the benefits for the other?
ATI offers drivers tweaked for specific applications. i get excellent results with the solidworks- and 3dsmax-specific drivers. i do believe that they offer game-specific drivers on their site as well.
-erik
ThinkStation P700 · C20 | ThinkPad P40 · 600
fox_napier
What cell battery are you using? And are you using the same battery type for both P and non-P machines?
awolfe63 wrote:I've actually been amazed at the difference in power. The 15" 42P seems to have 20% less battery life than the 15" 42 - some of that must be due to the 1600x1200 display - but the 14" 42P seems to have 15% less battery life than the SXGA+ 42 - so much of it must be the graphics chip.
fox_napier
Hmm, I can understand 5 to 10% but 20% less battery life just to have 64 MB and OpenGL hardware support is a toss-up. (unless of course you use software like Photoshop and Solidworks on a regular basis)
I guess I'm a bit taken by surprise that such a decrease comes from the video chip's additional memory alone. I have to wonder if the fan and high-speed hard drive are co-conspirators as well. Perhaps the T42 with Radeon 9600 doesn't spin its fan as long or as often as the T42p does to keep its heat signature within acceptable parameters.
I would love to hear about the heat dissipated in real life between those ThinkPads with FireGL T2 128 MB and those with Radeon 9600 64 MB. AND also how long the batteries last in real life.
Does increased heat = decreased battery life?
awolfe63 wrote:I'm just using the numbers in the TABOOK.
6 cell for the 2378-DYU and 2373-CYU
9 cell for the 2373-FVU and 2373-3UU
No way the extra memory can account for the power drain! This isn't an extra burner on a toaster oven we're talking about! It's just an extra 64MB of SDRAM...the same old SDRAM that is in main memory, just clocked at 200 MHz instead of 166 MHz.
The absolute worst estimate (and I mean absolute worst) would be that the extra 64MB of video SDRAM uses about twice the power that 64MB of main memory would....but I don't see anybody holding back on buying that extra 512MB SODIMM to save power (although perhaps we should be??)
A more plausible source of difference is cooling...if it takes more fan usage to cool one GPU over the other, this could have a big effect on battery life. With the T41s, this was possible because the FireGL probably did generate more heat than the 9000. However, I would be surprised if the FireGL dissipates more heat than the 9600 as they're both essentially the same chip.
Personally, I doubt that the lifetime estimates in the TABOOK for the T42s are very accurate at all.
I agree that it would be very interesting to have a side-by-side comparison of 9600 vs. FireGL. But it would have to be with the 14.1" models, because the 15" UXGA screen draws 7.2W average, whereas the 15" SXGA+ draws only 6.6W...so results would be biased.
Mofongo
T42p 2379-DYU: 1.8 GHz Dothan, 15" Flexview UXGA, Bluetooth, IBM a/b/g, 80GB 5400RPM
If you can't beat your computer at chess, try kickboxing.
fox_napier wrote: and additional registers being used for OpenGL rendering
fox_napier wrote: but 20% less battery life just to have 64 MB and OpenGL hardware support is a toss-up. (unless of course you use software like Photoshop and Solidworks on a regular basis)
I don't mean to be an [censored], but I don't think these statements are accurate. Now mind you, I don't have a T42 or T42p in front of me, but I do know a bit about graphics hardware (I do a lot of graphics programming).
I don't think the FireGL T2 128 MB (aka M11) has "additional registers" for OpenGL rendering. Given that it is a different but very similar chip, there could be differences in the hardware interface. I could be wrong about this, but it doesn't really matter. More importantly, there is no doubt that the Mobility 9600 (aka M10) supports OpenGL in hardware. The difference is in the drivers: the FireGL has OpenGL drivers optimized for robustness and quality for 3D modeling apps like AutoCAD and Maya, etc. The 9600 has OpenGL drivers optimized for games. Also, I'm pretty sure that Photoshop does not use OpenGL.
If you care about using Longhorn in X years, you might want to get the 128MB - Microsoft has said that a 128MB VRAM DX9 card is the minimum spec.
-
JaimitoBond
- Sophomore Member
- Posts: 165
- Joined: Sat Apr 24, 2004 12:50 pm
thinkpod wrote: I don't think the FireGL T2 128 MB (aka M11) has "additional registers" for OpenGL rendering.
FireGL is also an M10.
fox_napier
Agreed about the real world test with identical 14.1" T42(p)s that only differ by the video configuration.
And I was hoping that an experienced graphics person like thinkpod would post addressing the technical details. But then I'm wondering: if both have OpenGL support, what's the point of marketing them differently? Is the 9600's OpenGL support faster but less stable? Conversely, is the FireGL T2's OpenGL support more stable but *slower*? Wouldn't you want to render faster in Solidworks rather than slower? Or is it that you want to render more accurately and with more stability?
BTW, the FireGL T2 128 is called M10 as well, more specifically M10 GL-128. Is M11 the Radeon 9700?
Well - here is some actual data for the 14" units
The battery test is at
http://www.pc.ibm.com/ww/thinkpad/think ... y_life.pdf
The 9 cell battery is 72WH - so the T42P is pulling ~10.7W
The 6 cell battery is 48WH - so the T42 is pulling ~10.25W
About a 4% difference. Who knows where the TABOOK data comes from.
Andrew Wolfe
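For the curious, the arithmetic above is easy to check in a few lines of Python (a sketch only; the draw figures are awolfe63's estimates derived from IBM's battery-life PDF, not independent measurements):

```python
# Battery capacities and estimated average power draw for the 14" units
# (figures quoted from the post above)
t42p_capacity_wh = 72.0   # 9-cell battery
t42_capacity_wh = 48.0    # 6-cell battery
t42p_draw_w = 10.7
t42_draw_w = 10.25

# Implied runtimes: capacity divided by average draw
t42p_hours = t42p_capacity_wh / t42p_draw_w   # ~6.7 h
t42_hours = t42_capacity_wh / t42_draw_w      # ~4.7 h

# Relative difference in power draw between the two machines
diff_pct = (t42p_draw_w - t42_draw_w) / t42_draw_w * 100
print(f"T42p: {t42p_hours:.1f} h, T42: {t42_hours:.1f} h, draw difference: {diff_pct:.1f}%")
```

That ~4% spread supports the point that the GPU memory alone can't explain a 15-20% battery-life gap.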
JaimitoBond wrote: FireGL is also a M10.
My mistake - you sir, are correct.
fox_napier wrote: Is M11 the Radeon 9700?
Yes.
fox_napier wrote: But then I'm wondering if both have OpenGL support, what's the point of marketing them differently? Is the 9600's OpenGL support faster but less stable? Conversely is the FireGL T2's OpenGL support more stable but *slower*? Wouldn't you want to render faster in Solidworks rather than slower?
There's a bunch of different aspects to this. One is that there is no "faster" - there's only faster for a particular use. Optimizing for one use often de-optimizes for another. Now, consumer-oriented OpenGL drivers are aggressively optimized for games - where "aggressively" gets into territory where there might be image quality reductions. This can be due to a straight speed/quality tradeoff, to bugs introduced by optimizations, or to cheating (an unfortunate aspect of benchmark-driven driver development). And finally, there's historically been a marketing-driven aspect to the pro 3D / consumer 3D differentiation - even when the hardware is identical, the consumer drivers might not include features like AA lines.
fox_napier
Here's some interesting information for those considering using a T42 with Longhorn. Check out the slides on this page:
http://www.extremetech.com/article2/0,1 ... 261,00.asp
Especially considering the last slide, "Minimum AGP 8x or PCI Express," it doesn't look like the T42 qualifies. Wonder how set in stone these requirements are....
Also, I think this is the "Tier 2 User Experience" in Longhorn, where there's fancy anti-aliasing and pixel shading every which way you look. I'm guessing there's a Tier 1 experience, a la graphics lite Longhorn.
Last edited by fox_napier on Wed May 26, 2004 12:01 am, edited 1 time in total.
fox_napier
Here's a generic link on video memory. I was wondering how the amount of video memory impacts real-world work:
http://www.m-techlaptops.com/video_memory.htm
It looks like the figures provided are just to sustain a given resolution at a given bit depth. I'm guessing if there is extra video memory, it is used to store application data, like texture maps.
I had a hunch but now see that if there isn't enough video memory, it overflows and shares main memory. (I used to have a 1600x1200 screen I ran at 32 bit color depth on a 16MB system. According to the link above, my video memory alone should not have been able to handle it.)
DVI pass thru with 9600 Radeon?
Does the T42 with the 9600 Radeon support DVI pass thru using the port replicator???
In the T41 series, you HAD to get the "p" workstation model for DVI pass thru. Any changes here?
Re: DVI pass thru with 9600 Radeon?
Yes, you can only get DVI through the port replicator. There is no DVI port on workstation models either. Regular T40/41/42 also work fine with the DVI pass through on the port replicator.
Regards,
G-Man
The purpose of offering both a consumer version (9600) and a professional version (T2) of a graphics chip is of course to be able to charge the professional users (who supposedly are less price sensitive) more. But to really manufacture two different versions would be cost prohibitive, so the manufacturers (both ATI and NVIDIA) use various "tricks" to be able to charge two different prices for the same chip.
The basic principle is to disable hardware acceleration of some feature that professional applications use a lot but that games rarely use. NVIDIA, for example, uses wire frame performance as the differentiator (the consumer chip has very slow software-based wire frame performance, whereas the professional chip takes advantage of hardware acceleration). If nothing else, this tactic means that cards based on the consumer chip get really poor results in some popular professional benchmarks (Viewperf), thus steering professional users away from the cheaper cards.
How NVIDIA disables features is fairly well known (I have found several articles about it on the web), but pretty much nothing (to my knowledge) has been published on how ATI does it.
NVIDIA started out by making the chip version hardware selectable (strapping a pin high made it one type whereas strapping it low made it the other type). Of course it did not take long before the community discovered how a little work with the soldering iron could multiply the value of a graphics card, so these days they do the disabling in the bonding (chip packaging) stage instead (some connections on the chip just aren't brought out to the pins when the consumer version is packaged).
What little that has been published about how ATI differentiates the two versions of the M10 leads me to suspect that there is no difference at all (apart from the package marking and the price). But there may be some bin sorting going on, i.e. ATI always attempts to make a T2, but if some OpenGL acceleration hardware fails the chip test, the chip is still sellable as a 9600. Of course, if the process works well, many more T2s get manufactured than 9600s, whereas the demand is the opposite. So many chips that would qualify as T2s get marked as 9600s anyway.
If this assumption is true, the only difference between a T2 and a 9600 could be that the T2 is guaranteed to have certain OpenGL hardware acceleration features working, whereas for a 9600 it is a matter of luck if it is working or not.
Further, if this assumption is true, it should be possible to use the professional drivers with the consumer chip. Has anyone tried this?
Or, is anyone aware of some mechanism that ATI is using to prevent the professional drivers from working with the 9600?
Last edited by bert on Mon May 31, 2004 2:55 am, edited 3 times in total.
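bert's bin-sorting hypothesis amounts to a simple labeling policy at chip test time. Here's a toy Python sketch of the logic he describes (entirely hypothetical - the pass rate and demand figures below are made-up illustration, not published ATI data):

```python
import random

def mark_chip(opengl_units_pass: bool, t2_demand_met: bool) -> str:
    """Label one tested M10 die under the binning policy described above."""
    if not opengl_units_pass:
        # OpenGL acceleration block failed chip test: only sellable as consumer part
        return "Radeon 9600"
    if t2_demand_met:
        # Fully working die, but the (smaller) professional market is already served
        return "Radeon 9600"
    return "FireGL T2"  # guaranteed-working OpenGL acceleration

# Toy run: assume 90% of dies pass the OpenGL test, but FireGL demand only
# absorbs ~20% of them -- so most fully-working dies still ship marked as 9600s.
random.seed(0)
bins = [mark_chip(random.random() < 0.9, random.random() < 0.8) for _ in range(1000)]
print(bins.count("FireGL T2"), "T2s,", bins.count("Radeon 9600"), "9600s")
```

Under this (assumed) policy, a 9600 and a T2 can be physically identical; only the T2 carries the guarantee.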
fox_napier wrote:Here's a generic link on video memory. I was wondering how the amount of video memory impacts real-world work:
http://www.m-techlaptops.com/video_memory.htm
It looks like the figures provided are just to sustain a given resolution at a given bit depth. I'm guessing if there is extra video memory, it is used to store application data, like texture maps.
I had a hunch but now see that if there isn't enough video memory, it overflows and shares main memory. (I used to have a 1600x1200 screen I ran at 32 bit color depth on a 16MB system. According to the link above, my video memory alone should not have been able to handle it.)
I have no idea why these are posted - they are wrong by a factor of 10. 1600x1200 32bpp requires 7.6MB - it runs fine on an 8MB graphics card. Sometimes 3D can require 3x that much for Z buffer + double buffering. 3D cards use the rest for textures, compositing, anti-aliasing, display lists, etc.
Andrew Wolfe
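awolfe63's factor-of-10 correction is easy to verify: a framebuffer needs width x height x bytes-per-pixel. A quick Python check (the "3x for double-buffered 3D" rule of thumb is from his post above):

```python
def framebuffer_mb(width, height, bits_per_pixel):
    """Megabytes needed for a single frame at the given resolution and depth."""
    return width * height * (bits_per_pixel // 8) / (1024 * 1024)

single = framebuffer_mb(1600, 1200, 32)  # one 2D frame: ~7.3 MB, fits on an 8MB card
# Double-buffered 3D roughly triples this: front buffer + back buffer + Z buffer
buffered_3d = single * 3                 # ~22 MB
print(f"{single:.1f} MB single-buffered, {buffered_3d:.1f} MB double-buffered 3D")
```

Everything beyond that on a 64MB or 128MB card is available for textures, display lists, anti-aliasing buffers, and so on.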
fox_napier
Yes, after doing the calculations myself using this other page as reference:
http://www.hardwarecentral.com/hardware ... rials/68/1
That previous page I linked looked rather inflated.
So, you're saying that "textures, compositing, anti-aliasing, display lists" take up the rest of the memory.
I think someone else remarked that Photoshop doesn't take advantage of this excess memory. So is it just for games and rendering in 3D/modeling? How about non-linear video editing? Does that benefit from more video memory, or is that more dependent on a fast video chipset/processor?
Also, do you think we'd ever see say +512MB video cards? What could they be used for, besides rendering? Seems like a very small, niche market of professionals.
fox_napier
After looking around some more I found a more recent article:
http://www.dansdata.com/gz014.htm
It answered a lot of my "why the extra memory" questions.
benz wrote:Wow, that must have been written long ago, judging by this:
Extreme low-end cards often have only 256 KB, while very high-end cards have as much as 8 MB of memory or more. The standard for the run-of-the-mill SVGA card is 1 or 2 MB.
In general - the extra video RAM is only available to the 3D engine - so it is only used by 3D apps. Video rendering, Photoshop, etc. all use system memory. (In a former life I was CTO of a large graphics chip company - for a while I managed the design of the graphics chip in the T20/T21/T22/T23.)
fox_napier wrote: Also, do you think we'd ever see say +512MB video cards? What could they be used for, besides rendering? Seems like a very small, niche market of professionals.
512MB memory cards are useful to 3 segments.
1) 3D graphics and CAD professionals
2) Over-hormoned teenagers who like to play 3D games
3) Everyone else who likes to act like an over-hormoned teenager
All in all - that's a lot of people.
Andrew Wolfe
Hmm, so to wrap it up, for "the rest of us", the 128MB FireGL T2 card is overkill...
at best it's not much more than a waste of $$,
at worst, it will produce more heat and consume (slightly) more battery power, whilst not yielding any tangible benefit - oh, and there seems to be the "driver" issue which could mean that you could actually end up with decreased performance in certain scenarios.
One of the symptoms I have seen in the past with supposedly higher-end but slightly less mainstream hardware is that once it is replaced by next year's model, the vendor/community devotes less attention to supporting such models. The fact that you can apparently use the mainstream driver with the T2 is good news in that respect...
Maybe that's a bit of a quick conclusion but it's actually been mine for a while...
Now, I wish IBM would produce some Txx machines with fast CPUs, fast HDs and "normal" GPUs. To me, this translates to the 2373-KTU with a Radeon 9600 instead of a T2, but that machine simply doesn't exist (yet?)...
Oh well, for the time being, I had to take the plunge and ordered a KTU
-
lilserenity
- Junior Member

- Posts: 335
- Joined: Mon May 24, 2004 4:24 pm
- Location: Brighton/Worthing
- Contact:
I have to say this weekend I have been doing some CG art on my T23 (16MB SuperSavage IX) and it has performed admirably well, just as well as it did on the 16MB ATI All in Wonder 128 card I had in a Compaq machine (it was something like the Rage128Pro chip; it was out of date when I bought it).
The only lag came from Photoshop putting intensive demand on the swap file, but it was easily bearable and hardly affected matters.
I'd be most interested to see how the ATI Radeon 9000 performs in my T40 when I get it (32MB VRAM).
However, in a past life I managed on the Amiga 1200 (and before that the A3000 with ECS) with AGA, a dog-slow chipset at 8-bit in 640x480.
Hardly finished but this is the result of 12 hours work so far:
http://www.lilserenity.dynalias.com/thu ... colour.jpg
(Pencil outline, scanned in and photoshopped, long way to go yet...)
Vicky xx
Hmm, so to wrap it up, for "the rest of us", the 128MB FireGL T2 card is overkill...
at best it's not much more than a waste of $$,
at worst, it will produce more heat and consume (slightly) more battery power, whilst not yielding any tangible benefit - oh, and there seems to be the "driver" issue which could mean that you could actually end up with decreased performance in certain scenarios.
But you do get to tell everyone that yours is bigger
Now, I wish IBM would produce some Txx mahines with fast CPUs, fast HDs and "normal" GPUs. To me, this translates to the 2373-KTU with a Radeon 9600 instead of a T2 but that machine simply doesn't exist (yet?)...
I agree - that would be my first choice as well. (Or maybe a KXU with a Radeon 9600 - I can't decide what screen size to get either...)
Andrew Wolfe