I don't remember where I read it, but supposedly the battery can only keep the data alive overnight, some 12 hours, which isn't great.
For the purposes of a scratch area it appears this would help improve the overall performance of a machine, but the cost is a bit steep :( I was wondering, though, how it would perform in a three-way test involving conventional HDDs and these new "hybrid" HDDs.
SpawnofSonic
For the purposes of a scratch area it appears this would help improve the overall performance of a machine, but the cost is a bit steep :( I was wondering, though, how it would perform in a three-way test involving conventional HDDs and these new "hybrid" HDDs.
The hybrids use flash memory (IIRC) which is still a lot slower than RAM, but it would certainly give them an advantage over regular HDDs. However, I'm not sure whether our particular testing regime would show that.
I am getting really annoyed that the people who make this sort of hardware always put a limiting factor on it; by this I mean a PCI bus instead of a PCI-E bus, or SATA150 instead of SATA300. Have hardware developers not heard of PCI Express?!
On to the solution for backing up this drive: there's a much easier way, Acronis (True Image?). It comes with its own backup scheduling, so you could schedule a backup every 10 hours or so, and a restore is just as easy as a backup.
I won't be getting this RAMdisk or any other RAMdisk until there are ones which use both SATA300 and PCI-E. Same goes for RAID cards as well, tbh.
EDIT: To further illustrate my point
http://www.techpowerup.com/index.php?14941 . Correct me if I'm wrong, but doesn't ATA have a maximum transfer speed of 133 MB/s?
It's normally down to the cost of implementation and R&D; i.e. it's cheaper to implement technology they know more about.
Yeah, David's right. Also they've used an FPGA… basically a chip they can program to do what they want. I wonder if it's actually quick enough to justify a SATA 300 PHY?
As for Acronis true image… have they made version 9 support x64 yet? I have a license but can't use the damn thing.
I've used this as a scratch disk for Photoshop and for the swap file, and it can make a huge difference. My friend works in television and 700 MB pictures are not unheard of. When Photoshop starts putting these on the scratch disk it can take 3-4+ minutes. On the i-RAM it's only a matter of seconds.
The computer has 3 GB of RAM already with the /3GB switch, and there wasn't much more that could be done to speed Photoshop up without spending a fortune.
RAID might have improved the hard disk throughput, but I would have had to upgrade the case, power supply, cooling, and RAID controller, and get fast hard disks. I tried RAID a year or so ago and was disappointed with the performance. I'm also wary of hard disk reliability.
For other people 4 GB isn't really enough; the improvements are very application-specific.
hmmm, I've never used acronis on x64 before so I can't help you there… Could you perhaps run Acronis on a network computer and make your disc a network drive or something like that?
About the R&D costs though, couldn't they just add a premium on the final product? It would make it so much easier for motherboard makers if everything was one standard instead of having 10 different types of standards (PCI, PCI-E, PCI-X etc.) on the mainboard. I would pay more for a PCI-E version of the XFX RAID card, as well as a lot of other PCI-only products (sound cards, etc.)
scottf
For other people 4 GB isn't really enough; the improvements are very application-specific.
I think that sums it up perfectly.
ExceededGoku: Not for the backups I want to do. Anyhoo it's a bit off topic from the i-RAM, so I'll grumble no more.
what does the i stand for?
Steve B
what does the i stand for?
Trend whore.
I've read a few other online reviews about the iRAM, most a good 6 months or so back when it was first released. Real world performance is disappointing and it really seems only suitable for those with a specific need.
I think in time this sort of technology could be commonplace, with PCs shipping with the OS on a RAM-based device and a hard drive used for data storage and backup. It offers huge advantages in speed and, more importantly, reliability (assuming the backup happens automatically, perhaps every time the computer shuts down).
I want one, but I'm not sure why. It just screams SPEED at you. Of course it's a new mobo/proccy/RAM/gfx/HDD first… so maybe someone will have a clear-cut use for it by then, rather than just bragging rights?
Non-Photoshop-etc. user btw; I know it's got some uses already, just not for the general PC user.
The cost is definitely a huge deterrent. I mean, does it really need to use PC3200 RAM? Wouldn't this be a great way to put good old SDRAM lying around to good use?
GooglyMoogly
The cost is definitely a huge deterrent. I mean, does it really need to use PC3200 RAM? Wouldn't this be a great way to put good old SDRAM lying around to good use?
How many 1GB SDRAM sticks do you have, though? Capacity is an issue too.
maybe if i superglued all 9673 of my old sd-ram sticks together i might make a 1gig stick:)
Steve
How many 1GB SDRAM sticks do you have, though? Capacity is an issue too.
Sure, none hehe. But I probably have 8-10 128-256 MB sticks of PC100 and PC133 RAM lying around. 8 slots filled with 256 MB sticks would be a decent 2 GB after all… It'd be a big card, but perfectly possible. Heck, my PC3200 DDR is in a 4x256 configuration too, so…
does the pc just see it as a hard drive? (i'm assuming so since it just connects via SATA) -so you only need your SATA driver to install windows on it?
can do backups using an acronis disc and it'll restore really quickly if it ever does lose its memory
have a spare 4gb of ram (all the same 3200 corsair sticks) and really tempted to get this for my main os (my xp uses about 450-750mb), then i'll use my raid-0 raptors for program files
Yes bledd, it's just a SATA disk as far as the OS is concerned. The PCI connection is only really there for power.
i realised pci was just for power, just hadn't found any reviews that used it as a main os drive
with an nlited os it can fit on a 1gb stick no problem (especially if you move programfiles to another drive)
Haven't read the article yet, but has anyone run these in raid 0?
:) I am building a system for a customer and I thought this would be a substitute for the swap file drive I was going to add. This would make Photoshop muy speedier, no? :confused: They use graphic files that are about a gigabyte each and then print them out to a large-format printer.
Newegg says the drive is $144.00: www.newegg.com/Product/CustratingReview.asp?item=N82E16815168001
I am trying to save them a few bucks by using a motherboard that will still support their Radeon 9800 (AGP card).
I picked a CPU based on what I thought I could budget from this site: www23.tomshardware.com/cpu.html?modelx=33&model1=438&model2=465&chart=186
So far I have a $130 CPU on Pricewatch: www.pricewatch.com/cpu/pentium_d_940.htm (Intel Pentium D 940, 3.2 GHz, 4 MB (2x2 MB) dual core, 800 MHz, Socket LGA 775, OEM, with a free heatsink/fan).
This would be my motherboard: www.newegg.com/Product/Product.asp?item=N82E16813135196
Or, more likely, this one from Gigabyte: www.gigabyte.com.tw/Support/Motherboard/CPUSupport_Model.aspx?ClassValue=Motherboard&ProductID=2471&ProductName=GA-VM800PMC With some matched Corsair, 2 GB at whatever the highest supported speed is, then add this RAM card, use their old case, the Radeon AGP 9800, and a nice 550 W power supply. That's what I am thinking of. I might be making a mistake keeping the AGP card; any comments? :undecided: timjordan at cheerful dot com
Yes… but even faster is to put the RAM in main memory, so you don't have to swap in the first place.
For Photoshop use I would have at least 3 GB on the motherboard. Set the /3GB switch to allow Photoshop to use 3 GB of system RAM.
http://www.microsoft.com/whdc/system/platform/server/PAE/PAEmem.mspx
I would make the Windows swap file a fixed size of 500 MB on the i-RAM, and point the first Photoshop scratch disk to the i-RAM card with the second Photoshop scratch disk on the fastest hard disk in the computer. This should give you the fastest speed in Photoshop and other apps. You may have to lower the history states to around 4. Every time you make a change in Photoshop, it saves a history state that could be up to the size of the file, so even 7 GB of RAM and fast storage can be eaten away pretty quickly.
Please excuse the bump:
The recent price drop in RAM has rekindled my interest in the i-RAM. I realise that the i-RAM reviewed only takes DDR RAM, and we'll have to wait for the next-gen device to take advantage of DDR2.
Although that made me wonder: isn't the current design really inefficient? Forget DDR2; isn't DDR RAM completely bottlenecked by the SATA/SATA-300 interface, by a factor of ten? So does that mean that i-RAM users should just get the cheapest memory they can, provided that it is compatible (and working)?
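The "factor of ten" intuition checks out on paper. Here is a rough sanity check using nominal peak figures; these are spec-sheet peaks, not measured throughput:

```python
# Back-of-the-envelope check: nominal peak bandwidths in MB/s.
ddr400_peak = 3200        # PC3200 DDR: 3.2 GB/s per module (spec peak)
sata150_peak = 150        # SATA 1.5 Gb/s, after 8b/10b encoding overhead
sata300_peak = 300        # SATA 3.0 Gb/s

print(f"DDR400 vs SATA150: {ddr400_peak / sata150_peak:.0f}x headroom")
print(f"DDR400 vs SATA300: {ddr400_peak / sata300_peak:.0f}x headroom")
# Even the cheapest DDR that runs at spec saturates the SATA link,
# so paying extra for faster RAM buys nothing on an i-RAM.
```

So yes: any working DDR module outruns the interface by an order of magnitude, and the cheapest compatible sticks are the rational choice.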
Lastly, this is more of a piece of feedback: I would like to see some real-world performance in reviews of this nature (OS boot and game loading times, and other disk-intensive applications we may encounter). Having looked at other reviews of this device, it seems that the i-RAM improves performance over a regular HDD in many applications by a much smaller margin than the I/O benchmarks may imply.
All,
I've read your posts, and it's sad that a lot of you (luckily, not all) make reviews or assumptions about products you've never touched or seen in action. Also, with every new product out there, there will always be those who want more and I'm one of those who “want more.” However, I do own an i-RAM, and I've tinkered with it in several ways.
Let me start off by saying it *is* blindingly fast, despite some of you complaining that it isn't SATA-II or DDR2 or whatever the latest and greatest technology is. The fact is, RAM is built for random access and hard drives are not. The mechanical aspect of hard drives slows them down for everything except sequential reads, period. They have to do millions of reads and writes, and those 8-millisecond seek times all add up. RAM, on the other hand, eliminates seek times altogether. File fragmentation worries are a thing of the past.
Just to give you an idea, my Windows XP installation typically boots up somewhere around 90 seconds from a SATA-I hard disk. This includes the power-up of the system, booting up to the GINA, loading the desktop, and waiting for that last program to finish executing. With the i-RAM as my primary drive, it completes this process in about 7 seconds. SEVEN. To the desktop, awaiting my next command. Most of that 7 seconds is POST and hardware initialization. (For the testing, I was utilizing Norton Ghost for imaging the drives).
I proceeded to test additional applications, including Microsoft Office 2003, and I put 32 applications in the Startup menu. (I had to remove a lot of unneeded files and reduce the size of the page file to do this test on 4 GB of storage, but on the plus side, Office 2003 took less than 2 minutes to install.) I rebooted again, and it took about 1 additional second to load all those applications AT THE SAME TIME. They loaded so fast I could barely see each window pop open – literally blindingly fast. Could it be faster? Sure. Would the difference be humanly discernible? Probably not. Once you change your system disk to RAM, it's no longer a bottleneck, and you'll be spoiled to Hell and back.
As for the comments regarding the pagefile, it doesn't matter how much RAM you add to your system. The pagefile is a necessary evil. Why, you wonder? I did, too. It's NOT additional memory per se. It's part of memory management. Memory is requested by software/drivers/etc., and memory is wasted by those same programs because they usually request more than they really need (it's a programming thing). They all have the freedom to address 4 GB of RAM *regardless* of how much you really have installed. The pagefile permits this fixed addressing space to occur, and faults pages of unused or idle memory to this file. If you turn off the pagefile, you'll just be shooting yourself in the foot. Those programs will simply waste RAM and leave nothing for other programs to use, leading to inevitable “out of memory” error messages. The i-RAM is perfect for hosting your pagefile, along with Temporary Internet Files, TEMP folders, Photoshop scratch disks, P2P temp folders, and anything else that reads and writes a lot during processing.
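The paging behaviour described above can be sketched with a toy model. Everything here (the class name, the LRU eviction policy, a dict standing in for the pagefile) is a simplification for illustration, not how the Windows memory manager actually works:

```python
from collections import OrderedDict

class ToyPager:
    """Toy model: programs touch as many pages as they like, but only
    a few physical frames exist; the least-recently-used page spills
    to the 'pagefile' so the address space can exceed physical RAM."""
    def __init__(self, physical_frames):
        self.frames = OrderedDict()   # resident pages, LRU order
        self.capacity = physical_frames
        self.pagefile = {}            # pages that were paged out

    def touch(self, page, data=None):
        if page in self.frames:
            self.frames.move_to_end(page)          # recently used
        else:
            if len(self.frames) >= self.capacity:
                victim, vdata = self.frames.popitem(last=False)
                self.pagefile[victim] = vdata      # page out the LRU page
            # page in from the pagefile if it was spilled earlier
            self.frames[page] = self.pagefile.pop(page, data)
        return self.frames[page]

pager = ToyPager(physical_frames=2)
pager.touch("A", "a-data")
pager.touch("B", "b-data")
pager.touch("C", "c-data")           # evicts A to the pagefile
assert "A" in pager.pagefile
assert pager.touch("A") == "a-data"  # faults A back in, evicts B
```

With the pagefile disabled, the third `touch` would simply have nowhere to spill to, which is the "out of memory" failure mode described above.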
Now, about wanting more: the improvement in most immediate demand is obviously MORE storage. 4 GB isn't much, even for plain vanilla Windows XP, and is unacceptable for Vista. 8 GB? Could still be quickly filled once you include applications like Office, the pagefile, and the endless service packs and patches for Windows. 12-16 GB is preferred. Granted, the hardware needs to be designed for 2 GB RAM chips, and so does your wallet. But should they go this route, and should you experience it, you'll realize why hard drives are becoming antiquated these days. Flash memory is free of seek times, but its bandwidth is still not quite up there. You'd have to stripe about 8 flash devices to get close to SATA-I speeds. Good luck with that, especially budget-wise.
The next desire would be some sort of live backup. The included software allows periodic backups, but for an operating system that's not enough peace of mind. The battery keeps the i-RAM alive even when power is disconnected from the PC. PCI slots are HOT even when the computer is off – as long as the power supply is also hot. The battery is NOT used until you physically unplug the power supply, and the battery lasts several hours. Some sort of RAID-1 configuration would be helpful, perhaps a dedicated SATA or flash device to keep a mirror copy on in the event of catastrophic power loss. The disk could just be read back to RAM should this happen while operating in critical mode, and soon you'd be on your merry lightspeed way. (Does anybody know if RAID-1 will always prioritize reads from the fastest disk? If so, any RAID-1 could be set up with the i-RAM as one disk and a mechanical drive as the second.) This mirror would be kept up-to-date in the background when the i-RAM is not answering requests.
And, my last request for an update is an external case or bay-mounted box to put the card in, with a DC power supply. This way, I have the option to use a PCI slot or not.
Now, if you research a bit deeper, HyperOS Systems out of the UK has developed the HyperDrive 4 – which can now hold up to 32 GB of RAM, has SATA and PATA connectors, is a 5.25" bay-mounted RAM drive, and has an option for automatic backups to a direct-mounted flash or 2.5" drive. However, its price is enormous by comparison – approximately $4200 for a fully loaded 16 GB model, while the i-RAM cost me $380 fully loaded with 4 GB of RAM. Do the math: 4 x $380 = 16 GB @ $1,520. So, with a few spare PCI slots, you can have something worthy of your brand-new quintuple-core 8 GHz extreme processor with a 1566 MHz FSB and 2 GB cache.
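Running the numbers from the paragraph above (prices as quoted, assuming you can actually stripe four cards):

```python
# Cost-per-GB comparison using the figures quoted above.
iram_unit_cost, iram_unit_gb = 380, 4        # one fully loaded i-RAM card
hyperdrive_cost, hyperdrive_gb = 4200, 16    # fully loaded HyperDrive 4

iram_16gb = iram_unit_cost * (16 // iram_unit_gb)
print(f"4 x i-RAM (16 GB total): ${iram_16gb}")
print(f"i-RAM:      ${iram_unit_cost / iram_unit_gb:.2f}/GB")
print(f"HyperDrive: ${hyperdrive_cost / hyperdrive_gb:.2f}/GB")
```

At roughly $95/GB versus $262.50/GB, the i-RAM route costs about a third as much per gigabyte, at the expense of PCI slots and extra SATA ports.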
I'm planning to build a cheaper, secondary motherboard-in-a-box with power only (no CPU, no RAM) just to power 3 to 4 i-RAM drives and then use a SATA RAID controller in my primary computer to utilize them all, thus freeing up the space in my primary box and getting 16 GB of lightning-speed storage. The secondary box will be cooled, small, and sit next to (or on top of) my primary computer, with SATA cables running into my primary (or to external SATA connectors if I go that route). It still remains to be seen how the i-RAM might perform as part of a mirrored set with a mechanical disk. I imagine this depends highly on the RAID controller itself, but it's worth investigating.
I hope this information is helpful to all of you.
dijitul
This is indeed some good information on the i-RAM.
I don't use one, but have read articles about it.
I have also heard that Gigabyte will be releasing a new i-RAM (Q1 2008).
It will be a BOX type instead of a PCI interface.
I'm not sure about the specs, though.
Thx for the info :)
I'd like to see some real application tests: encoding, gaming, etc.
I'd like to mention the addition of another potential contender in this category of drives, one that has put my interest in purchasing more i-RAM components on hold. Fusion IO has announced the upcoming release of their ioMemory-based ioDrive, a slot-mounted NAND-based flash drive that will sport up to 640 GB (yes, you read that right, GB) of storage at up to 800 MB/s speeds. They predict it to be approximately $30/GB when released next year, with several storage-capacity models. See Fusion IO's website for more information regarding compatibility and applications. Let me know if you happen to get a hold of one!
Well, that site can be found
here.
At $30/GB, their "entry-level" 80 GB model works out to $2400. At that price point it's prohibitive for most users, yet it's funny to see they list gaming as one of the beneficiaries. Definitely a high-end product for high-end needs, or for those with fat wallets.
Still, I'd be interested in seeing some performance tests :)
Hence, the term “bleeding-edge.” It stands to reason that higher performance goes along with higher cost. If someone wishes to buy a Lamborghini but is going to complain about the cost, they should instead go shopping for a Chevy and stop whining about how expensive Lamborghinis are or what else should be included for $100,000. You get what you pay for. Besides, prices always eventually come down, and/or features increase. The question is: Do you want to stand by and watch, or get in the game and play?
I know people who've paid over $1000 for a CPU. Just the chip, which is nothing without a motherboard, RAM, video card, etc. Whereas, there are CPUs available for $300. It's all about priorities.
P.S. Thanks for posting the link. I don't yet have the “privilege” to include URLs in my posts. Just a couple more to go…
I don't get why these drives are so expensive; 4 GB USB flash drives are ~£20 these days. Take away the USB interface and use a SATA300 one instead, stick a few together, and you're away…
GSV Trig
I dont get why these drives are so expensive, 4Gb USB Flash drives are ~£20 these days, take away the USB interface and use a SATA300 one, stick a few together and your away…
I hear you, but it turns out not to be that simple. The primary issues these days with flash drives (for operating systems) have been speed and write endurance.
Consumer flash memory does not read/write as fast as consumer magnetic media like hard drives (setting aside seek times for this comparison), so to increase rates they have to create wider buses or more parallel pipes to write more data at the same time. Therefore, to get speeds equivalent to SCSI and SATA controllers, it takes multiple flash pipes. Those small cards you pop into your camera are not really high-performance when you compare them to hard drives.
Flash also has the issue of read/write life cycles, which modern high-end controllers extend by ensuring they write to every bit evenly (since fragmenting isn't much of a factor anymore). Deleting a file doesn't automatically mean that space will be immediately overwritten.
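The "write to every bit evenly" idea, wear levelling, can be sketched in a few lines. This is a deliberately naive model (real controllers track erase counts per block, remap in the background, and handle many more cases), but it shows the core trick of steering each write to the least-worn block:

```python
class WearLeveler:
    """Toy wear-levelling allocator: every logical write is steered to
    the physical block with the fewest erase cycles, so wear spreads
    evenly instead of hammering the same cells until they die."""
    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks
        self.mapping = {}  # logical block -> current physical block

    def write(self, logical):
        # pick the physical block that has been erased the least
        target = min(range(len(self.erase_counts)),
                     key=lambda b: self.erase_counts[b])
        self.erase_counts[target] += 1
        self.mapping[logical] = target
        return target

wl = WearLeveler(n_blocks=4)
for _ in range(100):
    wl.write(0)   # rewrite the same logical block 100 times
# wear ends up spread across all 4 physical blocks instead of one
assert max(wl.erase_counts) - min(wl.erase_counts) == 0
```

Without this remapping, 100 rewrites of one logical block would burn 100 erase cycles on a single physical block; with it, each block takes 25.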
So, we want fast but cheap flash? Consider the amount of storage, multiplied by the number of data channels required, plus the firmware for distributing the wear evenly, striping the information across the flash, and then passing it via the interface to the CPU. It's no longer cheap. As an example, a performance PCIe 12-port SATA-II RAID controller connected to twelve SATA-to-flash adapters, each with a 4 GB SanDisk Extreme IV CompactFlash card attached, would give you about 48 GB of storage at under 300 MB/sec write speeds. Just how much would something like this cost? If you figure it out, a 40 GB FusionIO drive for $1200 doesn't seem so expensive anymore.
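The striped-CompactFlash thought experiment above works out like this; the per-card write speed is an assumed round number, not a measured one:

```python
# Rough numbers for the 12-card striped-CF thought experiment.
cards = 12
card_gb = 4
card_write_mb_s = 40      # assumed sequential write speed per card
controller_cap_mb_s = 300 # assumed ceiling of the RAID controller/bus

capacity_gb = cards * card_gb
# aggregate write speed is capped by whichever is lower:
# the sum of the cards or the controller itself
throughput_mb_s = min(cards * card_write_mb_s, controller_cap_mb_s)
print(f"{capacity_gb} GB at up to {throughput_mb_s} MB/s")
```

Note the cards together could in principle push 480 MB/s, so in this sketch the controller, not the flash, is the limiting factor.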
Plus, these technologies, like all technologies, are getting cheaper to make. They will quickly become affordable. Bleeding-edge is bleeding-edge. There will always be something bigger and badder. No pain, no gain.
The complaints about the high cost of the HyperDrive 4 are not totally unfounded.

Look at it this way: a Tier 1 motherboard company like ASUS is now charging $200 to $300 for motherboards with tons of sophisticated on-board technology, only a subset (less than 50%) of which deals with RAM and SATA interfaces. And you can easily find large server motherboards with 8 DIMM slots as standard equipment.

Setting engineering costs aside (which would not be all that much, given that more sophisticated motherboards have already been designed and mass-produced), why should a competitor to the HyperDrive NOT retail somewhere between $100 and $200 (w/o RAM) and still support 8 x DDR2 DIMM slots? Is there a patent issue at stake here, perhaps?

And, as long as the connection is via a SATA bus, exploiting widely available on-board RAID controllers is a natural evolution of these i-RAM devices: more speed is not achieved with more slots per bay, but with more i-RAM devices running in parallel on multiple SATA cables, because the SATA bus is the real bottleneck with such a RAMDISK.

Thus, almost everyone who knows anything about such technology now agrees that Gigabyte should upgrade the 5.25" i-RAM to support DDR2 DIMMs and a 300 MB/second interface (at least), and possibly add a jumper to upgrade the interface to 600 MB/second when SATA-III is available.

Word on the street is that the 5.25" i-RAM Project Manager quit to take a job with another company.
Sincerely yours,
/s/ Paul Andrew Mitchell
Webmaster, Supreme Law Library
The complaint I had responded to was about the high prices of high-speed flash storage, not the RAM disk. The FusionIO drive falls into the category of flash because it uses a solid-state technology which doesn't require power to maintain storage.
I certainly agree the HyperDrive should be cheaper (eventually), but it's not and there are few other alternatives at this moment. Sure, there will be down the road, which just affirms that you always pay more for bleeding edge. Some people pay over $3000 for a cell phone ($600 + monthly service for two years). Why? Because it makes such a slight improvement in their life – and maybe they feel more cool.
Motherboards are manufactured by the millions, along with the chipsets and components those boards use. And, although motherboards are cheap, you still must buy a processor, RAM, drives, cables, cases, accessories, etc. You don't get a fully functioning system out-of-the-box, and by the time you do it's hundreds of dollars later. So I feel the motherboard comparison is a bit off-target in this particular case (regarding high-speed flash). Nobody has a template design to follow when it comes to engineering a hardware ramdisk with all the features we desire.
Let's turn this conversation around a bit: Why won't the *motherboard* manufacturers give us this option instead? Why can't we load a motherboard with 16 GB of RAM (or 64 GB of flash) and designate in the BIOS whatever amount we want as a physical disk? Wouldn't THAT be the cheapest implementation? Motherboards have every capability except an onboard battery to power the RAM. Since Gigabyte designs both motherboards AND ramdisks, why haven't they done this? Patents, maybe, or is it just lack of demand? RAMdisks typically have a very specific purpose in life, whereas I can see the FusionIO disk actually having more flexibility, compatibility, and reliability. The objective here is to get RAM speed with hard-disk storage. FusionIO claims to have achieved this, and at a reasonable cost (IMHO) if we consider ALL that it might do. It might just be vaporware like so many other ideas!
> Let's turn this conversation around a bit:
> Why won't the *motherboard* manufacturers give us this option instead?

Good point: I do specifically recall asking Intel's Amber Huffman if Intel would consider adding a BIOS option to designate a region of RAM as a RAMDISK, with a native device driver that emulates a SATA/3G HDD. She replied that they had decided to go instead with the flash disk cache concept aka "Robson Technology". Here was her written reply on 1/28/2007:

"The trend in the industry is towards using NAND for caches to speed disk access time. The major advantage of NAND is that it is non-volatile so you don't increase risk of data loss when using it as a cache. Intel's efforts in this space are referred to as 'Robson' and you can find more info on Intel's website and via Google."

I have also suggested to ASUS that they consider dedicating a portion of the extra-large number of DIMM slots on large server motherboards to a RAMDISK, but again this suggestion was not met with any noticeable enthusiasm.

I should also clarify that I made my suggestion to Intel's Amber Huffman BEFORE I discovered the RamDisk Plus product from superspeed dot com: this product works great, because it saves and restores the contents of each RAMDISK between shutdowns and startups without BIOS changes. We configured a 512 MB RAMDISK with that software and moved the IE7 browser cache to that partition, with MUCH SUCCESS!

But RamDisk Plus would NOT be suitable for loading Windows XP system software into such a partition; that partition must be enabled by system software that is launched after POST is completed and thereafter emulates a Windows letter drive.
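The save/restore-between-shutdowns trick that lets a software RAMDISK survive reboots can be sketched like this. This is purely an illustrative model (a dict standing in for the RAM disk, pickle standing in for the image format); it is not how RamDisk Plus is actually implemented:

```python
import os
import pickle
import tempfile

class PersistentRamStore:
    """Sketch of the idea: an in-memory key->bytes store that is
    flushed to a disk image at 'shutdown' and reloaded at 'startup',
    so the RAM contents appear to survive a power cycle."""
    def __init__(self, backing_path):
        self.backing_path = backing_path
        self.data = {}
        if os.path.exists(backing_path):          # "startup": restore image
            with open(backing_path, "rb") as f:
                self.data = pickle.load(f)

    def shutdown(self):                           # "shutdown": save image
        with open(self.backing_path, "wb") as f:
            pickle.dump(self.data, f)

path = os.path.join(tempfile.gettempdir(), "ramdisk.img")
store = PersistentRamStore(path)
store.data["cache/index.dat"] = b"browser cache bytes"
store.shutdown()

restored = PersistentRamStore(path)               # simulated reboot
assert restored.data["cache/index.dat"] == b"browser cache bytes"
os.remove(path)
```

The limitation noted above falls out naturally: the restore step runs inside a live OS, so the store can hold a browser cache but never the system files the OS itself booted from.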
More to your point above, if standard PCI slots have a 5-Volt "stand by" ("SB") pin, there is no reason why a subset of DIMM slots could not be powered in the same manner, in order to prevent the loss of otherwise volatile data. There is also a recent IBM patent which situates RAM in an external case, with a ribbon-style cable that plugs into the motherboard's main DIMM slots.

My motherboard analogy is consistent with the fact that all modern PCI-E motherboards have native support for Serial ATA hard disks, and market leaders now have native support for SATA/3G hard drives too. After stripping away all of the other added features on modern motherboards, we would be left with something very close to the HyperDrive 4 developed by a firm in the UK.
Thinking out loud for a minute, here is a concept which I think is worth exploring:

(1) Begin with the assumption that all system software is capable of using 64-bit addressing (the obvious future, e.g. XP x64);

(2) populate a motherboard with a very large number of 2 GB DIMM slots, allowing perhaps 32 GB of physical RAM to be addressed in linear fashion, in anticipation of 4 GB modules in the not-too-distant future (a total of 64 GB);

(3) the boot procedure reads a config file from an SSD or DVD and literally formats a subset of RAM "on the fly" (to host the C: system partition) beginning at physical address zero; drive-image software like GHOST does this, BUT it writes that image file to a hard drive partition now;

(4) then the remaining RAM is made available to the OS kernel and "ring 0" OS database, as usual, allowing the lower RAM subset to operate exactly the same as if it were an ultra-fast C: partition;

(5) for this concept to work best, registered error-checking RAM would be preferred, I would predict;

(6) as long as the motherboard remains powered UP, the contents of the C: RAMDISK subset would be preserved;

(7) in the event of any 2-bit errors which could NOT be corrected by the ECC logic built into the DIMMs, a corrupted code page could be "paged in" from a backup stored on something like the SSD or DVD;

(8) if the motherboard is totally powered DOWN, it returns to the state at which it started immediately before the last startup.
I realize that I am jumping right in at a running OS, and I am assuming that there would be a very special, one-time "Setup" procedure to get all these data files initialized, e.g. on SSD or DVD. Perhaps recent virtualization logic could be exploited effectively to "hide" that lower C: RAM subset from the running version of the OS, thus "tricking" the latter into treating that lower RAM subset as a standard C: system partition. A hardware-enforced "ring" system might be applicable to such a design also.
Just thinking out loud here.
Sincerely yours,
/s/ Paul Andrew Mitchell
Webmaster, Supreme Law Library
Here's what I just sent off to Intel's Amber Huffman and David Ray at ASUS:

[This is where a BIOS option would be most useful, i.e. to format the lower RAM subset for the C: system partition; once formatted, this boot process would proceed to load Windows system software from something like a special image file and write it into that C: partition. Once that task is finished, from that point forward the boot process runs normally to completion.]

What do you think of this sequence?

(A) Run XP x64 Setup normally to completion with a stable set of system software, i.e. loading XP onto a hard drive partitioned with drive letter C:;

(B) run Symantec GHOST and save a drive image file of C: to a DVD or an existing hard drive partition, e.g. D:;

(C) enhance the BIOS to perform 2 special functions, analogous to the FLASH BIOS functions now available on recent ASUS motherboards (e.g. EZ Flash 2): (i) format a user-defined subset of RAM as an NTFS partition with drive letter C:, beginning at physical memory address zero; (ii) then restore the drive image file to that C: partition;

(D) as soon as that restore task is finished, save changes and exit the BIOS normally, permitting the boot-up sequence to run to completion.
There are lots of implementation details to be addressed here, one of the most important of which is that we must implement the entire OS code and OS database so both are "relocatable", because in this scheme we start to load XP at the next address AFTER the last memory address assigned to the C: system partition, INSTEAD OF starting to load XP at memory address zero. But isn't that what virtualization is doing automatically already?
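The relocation point is just address arithmetic. With an assumed 4 GB subset reserved as C: (the size is a placeholder, not part of the proposal above), the kernel's load base would be:

```python
# Address arithmetic for the relocation concern (assumed sizes).
MB = 1 << 20
c_partition_bytes = 4096 * MB       # assumed RAM subset reserved as C:
os_load_base = c_partition_bytes    # XP loads at the next address after it
print(hex(os_load_base))            # first byte past the C: region
```

Every absolute address the kernel uses would need this offset applied, which is exactly the kind of remapping a hypervisor's guest-physical-to-host-physical translation already performs.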
Sincerely yours,
/s/ Paul Andrew Mitchell
Webmaster, Supreme Law Library