<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xml:base="http://thomas.kiehnefamily.us"  xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
 <title>infoSpace - Technology</title>
 <link>http://thomas.kiehnefamily.us/taxonomy/term/8/0</link>
 <description>Musings on technology, science, and literature concerning the same.</description>
 <language>en</language>
<item>
 <title>Digital Storage Update 2007</title>
 <link>http://thomas.kiehnefamily.us/digital_storage_update_2007</link>
 <description>&lt;p&gt;It has been well over a year since my last &lt;a href=&quot;new_mass_storage_technology_and_research&quot;&gt;digital storage update&lt;/a&gt;, and though there has not been any earthshaking new technology announced within that time, there has nevertheless been some advancement in several areas that I would like to address.&lt;/p&gt;
&lt;!--break--&gt;&lt;p&gt;&lt;b&gt;Vertical / Perpendicular Drives&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;One of the major highlights of the last year has been the introduction of so-called vertical or perpendicular drive technology.  Vertical recording aligns the data bits in a vertical, or perpendicular, orientation with respect to the plane of the storage media, instead of the traditional horizontal arrangement.  Vertical techniques are already in use and have significantly increased storage densities, particularly for compact notebook drives (see &lt;a href=&quot;http://www.wired.com/news/wireservice/0,70024-0.html&quot;&gt;Wired: Hard Drives Get Vertical Boost&lt;/a&gt;).  &lt;/p&gt;
&lt;p&gt;&lt;b&gt;New Media Manipulation Techniques&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Recording data perpendicular to the plane of the media works around the superparamagnetic limit, but is expected to peak at a data density of about 1TB per square inch.  Seagate is looking to extend this gain by combining it with other technologies, to the extent that we could see data densities of 50TB per square inch within 10 years.  A technique called HAMR (heat-assisted magnetic recording) uses a laser to heat the disk surface during writes; the surface then cools to a more stable state.  The heating exposes fewer individual grains of disc material to the write process, thus increasing data density.  This process is further refined by organizing the grains into a more regular pattern in a process called bit patterning, where a chemically encoded molecular pattern is infused into the substrate during manufacture.  The combination of these techniques with vertical recording yields one bit of data per grain of magnetic substrate, compared to the roughly one bit per 50 grains that we see now (see &lt;a href=&quot;http://www.wired.com/news/technology/0,72387-0.html&quot;&gt;Wired: Inside Seagate&#039;s R&amp;amp;D Labs&lt;/a&gt;).&lt;/p&gt;
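&lt;p&gt;As a back-of-the-envelope check (my arithmetic, not Seagate&#039;s), the figures above hang together: going from roughly 50 grains per bit down to one grain per bit multiplies the 1TB-per-square-inch perpendicular ceiling by about 50.&lt;/p&gt;
&lt;pre&gt;
# Back-of-the-envelope density check using the figures quoted above.
perpendicular_ceiling_tb = 1.0   # TB per square inch, quoted perpendicular peak
grains_per_bit_today = 50        # roughly one bit per 50 grains now
grains_per_bit_patterned = 1     # HAMR plus bit patterning target
projected = perpendicular_ceiling_tb * grains_per_bit_today / grains_per_bit_patterned
print(f"projected density: {projected:.0f} TB per square inch")  # 50
&lt;/pre&gt;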
&lt;p&gt;&lt;b&gt;Hybrid Drives and Solid State Storage&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;While some manufacturers continue to push for higher data densities, others have improved devices in different ways.  Hybrid drives have been developed that combine solid-state flash memory and conventional magnetic discs to increase speed and reliability.  Though this development does little to increase storage capacities, it does help reduce power consumption and portends the elimination of moving parts -- and the corresponding risk of mechanical failure -- as flash memory increases in capacity (see: &lt;a href=&quot;http://www.pcmag.com/article2/0,1895,1973122,00.asp&quot;&gt;PC Magazine: Seagate Launches First Hybrid Hard Drive&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Speaking of Flash memory, Freescale has improved on the concept by introducing MRAM (magnetoresistive random-access memory).  MRAM boasts faster read/write speeds and better stability than current Flash memory while still holding data after power has been removed from the chip.  This technology also improves on the limited lifespan of Flash memory (see: &lt;a href=&quot;http://news.bbc.co.uk/2/hi/technology/5164110.stm&quot;&gt;BBC: &#039;Magnetic memory&#039; chip unveiled&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;As if MRAM and Flash were not enough, research continues on &quot;phase change&quot; memory that promises more stable storage than Flash at as much as 500 times the speed.  In addition to being faster and more stable, phase change chips promise to be much more compact.  Samsung has already introduced initial prototypes of phase change chips, and production models will likely be out within a couple of years (see: &lt;a href=&quot;http://online.wsj.com/public/article/SB116580685002446215-4Hx7rrKLHcz7OHLOMyKOqi0aXlk_20061218.html&quot;&gt;Wall Street Journal: Disk Drives Face Challenge If New Chip Comes to Market&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Again, these solid-state technologies do little to increase storage capacities, but they improve stability and reduce power consumption, and thus offer more efficient and stable overall storage and retrieval systems.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Optical Storage&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;HD-DVD and Blu-Ray media are set to multiply their storage capacities by adding layers and increasing the data density per layer.  At base specifications, 10 layers on an HD-DVD would yield 150GB, assuming 15GB per layer; for Blu-Ray, the total over 10 layers jumps to 250GB, assuming the base 25GB per layer.  These extra layers are not supported by current readers, but the concept indicates a potentially longer lifespan for standards that initially seemed dead on arrival (see: &lt;a href=&quot;http://www.dailytech.com/article.aspx?newsid=5656&quot;&gt;Daily Tech: Three HD Layers Today, Ten Tomorrow&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Meanwhile, much of the HD-DVD vs. Blu-Ray debate has been sidestepped by the announcement of a hybrid disc capable of storing data in both formats on one disc.  Warner Brothers recently unveiled Total HD Disc, which eschews a standard-format DVD layer in order to bundle the two competing HD formats into one disc playable in either type of HD player.  This approach contrasts with the introduction of dual players that have both HD-DVD and Blu-Ray capabilities (see: &lt;a href=&quot;http://www.nytimes.com/2007/01/04/technology/04video.html?ex=1169701200&amp;amp;en=6d726c6a23497a50&amp;amp;ei=5070&quot;&gt;New York Times: New Disc May Sway DVD Wars&lt;/a&gt;). &lt;/p&gt;
&lt;p&gt;Even at the higher recording capacities afforded by multiple layers, neither standard will approach the capacities of the terabyte holographic discs that I reported on last year.  Given the massive marketing effort behind HD-DVD and Blu-Ray, and the emphasis on applications in entertainment as opposed to mass storage, I doubt that commercial manufacturers will dump these formats in favor of holographic media any time soon.  The most likely effect of the market battles over the two dominant HD formats is that newer, higher-capacity formats will come at a premium for those seeking to implement high-capacity data storage solutions.  Ars Technica suggests that smaller-capacity formats will be exploited first in order to decrease the cost to end users and hasten adoption.  But even those decreased capacities are expected to be greater than the multi-layer HD-DVD and Blu-Ray concepts discussed above (see: &lt;a href=&quot;http://arstechnica.com/news.ars/post/20060804-7424.html&quot;&gt;Ars Technica: Holographic storage a reality before the end of the year&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;While the HD-DVD / Blu-Ray market squabbles continue, yet another terabyte optical technique has been developed.  Researchers at the University of Central Florida have developed a 3-D optical system that uses two different light wavelengths to write to multi-layer DVD media, promising more than a terabyte per disc.  No word yet on market potential, but with so many terabyte optical techniques in the pipeline, one or more are bound to arrive soon (see: &lt;a href=&quot;http://news.ucf.edu/UCFnews/index?page=article&amp;amp;id=0024004105bd60439010c0c76ce2f00409b&quot;&gt;University of Central Florida: UCF Researcher’s 3-D Digital Storage System Could Hold a Library on One Disc&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Tape Scrolls On&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Not to be outdone, new developments in tape technology promise 15 times greater data density in new cassette form factors within five years.  This translates to roughly 8 TB per cartridge (see: &lt;a href=&quot;http://www.spacemart.com/reports/IBM_breakthrough_multiplies_the_amount_of_data_that_can_be_stored_on_tapes.html&quot;&gt;SpaceMart: IBM breakthrough multiplies the amount of data that can be stored on tapes&lt;/a&gt; and &lt;a href=&quot;http://www.wired.com/news/technology/0,70904-0.html&quot;&gt;Wired: Tape Storage Increases 15 Times&lt;/a&gt;).  At this sort of density, tape still offers the best price-to-capacity ratio and still out-carries all storage media short of large magnetic disc arrays.  The long-term reliability of tape remains debatable, however.&lt;/p&gt;
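&lt;p&gt;Working backward from those two figures (my inference, not from the articles), a 15-fold density jump that ends at roughly 8 TB implies that today&#039;s cartridges sit near half a terabyte:&lt;/p&gt;
&lt;pre&gt;
# Implied current cartridge capacity behind the quoted 15x projection.
projected_tb = 8.0        # quoted future capacity per cartridge
density_multiplier = 15   # quoted density increase
implied_current_gb = projected_tb / density_multiplier * 1000
print(f"implied current capacity: {implied_current_gb:.0f} GB")  # about 533 GB
&lt;/pre&gt;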
&lt;p&gt;&lt;b&gt;The X-Factor&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Moving on to more theoretical realms, scientists at the Max Planck Institute have made a breakthrough on a 40-year-old theory that reveals tiny, closed magnetic circuits -- vortexes -- whose polar properties could represent data bits.  This phenomenon occurs on a scale of about 20 atoms in diameter, which is much smaller than the single grains of magnetic material that Seagate hopes to exploit in the near future (see above).  Techniques exploiting this phenomenon are expected to be much more resilient against external disruptions such as heat and magnetic fields, but there is no word yet on a horizon for practical application or storage densities (see: &lt;a href=&quot;http://www.mpg.de/english/illustrationsDocumentation/documentation/pressReleases/2006/pressRelease200611281&quot;&gt;Max-Planck-Gesellschaft: Magnetic Needles turn Somersaults&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Even further out there is a scheme proposed by a Drexel University professor that claims 12.8 Petabytes in the space of a cubic centimeter!  The technique exploits the properties of nano-scale ferromagnetic wires stabilized by water.  Again, commercialization seems quite a ways away, but this should provide good fodder for speculative fiction writers everywhere, at least until the shock of a Petabyte iPod Nano wears off (see: &lt;a href=&quot;http://www.drexel.edu/univrel/dateline/default_nik.pl?p=releaseview&amp;amp;of=1&amp;amp;f=20060508-01&quot;&gt;Drexel University: For a Bigger Computer Hard-drive, Just Add Water&lt;/a&gt;).&lt;/p&gt;
</description>
 <comments>http://thomas.kiehnefamily.us/digital_storage_update_2007#comments</comments>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/storage">Storage</category>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/technology">Technology</category>
 <pubDate>Thu, 15 Feb 2007 07:57:45 +0000</pubDate>
 <dc:creator>tkiehne</dc:creator>
 <guid isPermaLink="false">40 at http://thomas.kiehnefamily.us</guid>
</item>
<item>
 <title>The Future of the Hard Drive</title>
 <link>http://thomas.kiehnefamily.us/the_future_of_the_hard_drive</link>
 <description>&lt;p&gt;On (roughly) the 50th anniversary of the invention of the hard drive, Tom&#039;s Hardware interviews Seagate&#039;s Senior Field Applications Engineer Henrique Atzkern (&lt;a href=&quot;http://www.tomshardware.com/2006/09/14/50th_anniversary_hard_drive/&quot;&gt;Quo Vadis, Hard Drive? The 50th Anniversary of the HDD&lt;/a&gt;).  In it, we catch a glimpse of some of the ideas being explored for increasing hard drive density, speed, and reliability, among other things.  Parsing through the acronym alphabet soup and surface technicalities, one thing remains clear: hard drive manufacturers are not running out of ideas for increasing storage capacity, so we can expect the dramatic leaps to continue.&lt;/p&gt;
&lt;!--break--&gt;&lt;p&gt;Let&#039;s look at this in terms of what we are storing.  Most people around my age can remember how any increase in storage capacity seemed to be followed immediately by increases in program size -- developers used the extra space to put more functionality and features into their programs.  Storage capacity has long since outgrown the needs of applications and operating systems, but users have since taken the lead in filling it.  First, users struggled with storing images and audio while developers introduced new compression schemes to accommodate them.  Later, video reached the masses and started filling hard drives, even in greatly compressed states.  &lt;/p&gt;
&lt;p&gt;But the gap keeps expanding as hard drives increase in size.  Text documents are not getting any bigger, even though the applications that create them keep bloating.  Moving from binary to XML representations has not significantly increased word processing file sizes.  The same goes for images and audio -- the bits needed to losslessly represent a 1200 dpi scan have not increased, nor have those needed for a 48 kHz digital audio file.  In fact, the bits needed to losslessly represent audio have actually &lt;em&gt;decreased&lt;/em&gt; with file formats such as &lt;a href=&quot;http://flac.sourceforge.net/&quot;&gt;FLAC&lt;/a&gt;.  On the other hand, video storage requirements are still expanding.  DV quality is now giving way to HD, and I would expect a few more developments before we reach a state where more bits do not yield better quality (for most typical applications).  &lt;/p&gt;
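&lt;p&gt;To put illustrative numbers on that claim (my own figures, not from any of the articles), the raw size of fixed-parameter media is itself fixed:&lt;/p&gt;
&lt;pre&gt;
# Raw sizes of fixed-parameter media do not grow with hard drives.
# An 8.5 x 11 inch page scanned at 1200 dpi in 24-bit color:
page_mb = 8.5 * 1200 * 11 * 1200 * 3 / 1e6
# One minute of 48 kHz, 16-bit, stereo PCM audio:
audio_mb_per_min = 48000 * 2 * 2 * 60 / 1e6
print(f"1200 dpi page: {page_mb:.0f} MB raw")                  # about 404 MB
print(f"48 kHz stereo: {audio_mb_per_min:.1f} MB per minute")  # about 11.5 MB
&lt;/pre&gt;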
&lt;p&gt;The only thing left to close the gap for these types of digital media is to have a lot of them.  Even then, I expect that the total unused storage, taken across all systems, will increase as dramatically as the storage devices themselves.  This can only mean good things for those who want to &quot;save it all.&quot;&lt;/p&gt;
</description>
 <comments>http://thomas.kiehnefamily.us/the_future_of_the_hard_drive#comments</comments>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/storage">Storage</category>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/technology">Technology</category>
 <pubDate>Thu, 12 Oct 2006 21:34:02 +0000</pubDate>
 <dc:creator>tkiehne</dc:creator>
 <guid isPermaLink="false">35 at http://thomas.kiehnefamily.us</guid>
</item>
<item>
 <title>Musings on a Systems View of Digital Archives</title>
 <link>http://thomas.kiehnefamily.us/musings_on_a_systems_view_of_digital_archives</link>
 <description>&lt;p&gt;As an outsider trying to grasp the bounds of the archival field at the same time that it is entering a period of unprecedented change, I frequently find myself attracted to discussions about this change.  Not only do such discussions clarify for me the boundaries of the field, but they give me insight into where I might be of help in the future.  In two recent bulletins of the Society of American Archivists (SAA), society president Richard Pearce-Moses initiated one such discussion.&lt;/p&gt;
&lt;p&gt;His &lt;a href=&quot;http://www.archivists.org/periodicals/ao_backissues/AO-Sept05.pdf&quot; rel=&quot;nofollow&quot;&gt;first message&lt;/a&gt; (Sep/Oct 2005, pp. 3 &amp;amp; 23) takes the tack that the new technical environment will change the &quot;how&quot; of archival practice while leaving the essential archival functions intact.  He elucidates several areas that archivists (and people in general) take for granted in the course of working with physical records.  Among these are common familiarity with paper artifacts, language, searching for files in filing cabinets, using photocopiers, and so on.  Each of these activities is so familiar in contemporary life that it disguises a host of assumptions made by the people performing it.  Pearce-Moses uses metaphor to describe different ways in which these common practices can be transformed into new methods for understanding digital records: letters have become email; &lt;a href=&quot;/notes_diaries_on_line_diaries_and_the_future_loss_to_archives_by_catherine_osullivan&quot; rel=&quot;nofollow&quot;&gt;diaries have become blogs&lt;/a&gt;; etc.  He summarizes by stating that &quot;we must not remain focused on the old and familiar,&quot; which I believe is a bit overbroad.  Perhaps, instead, all assumptions of the old practice must be taken with skepticism and re-evaluated in order to make a successful transition -- not so much a focus on only the new, but equally and intelligently on both the old and the new.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;http://www.archivists.org/periodicals/ao_backissues/AO-Jan06.pdf&quot; rel=&quot;nofollow&quot;&gt;second message&lt;/a&gt; (Jan/Feb 2006, pp. 3 &amp;amp; 23) takes a decidedly different approach, integrating feedback about more fundamental ways in which the profession may change.  Specifically, he addresses Joan Krizack&#039;s question: &quot;what is our job in the digital era?&quot;  Perhaps the most prescient observation he makes is that information technologists are staking claims on archival turf.  This is evident in &lt;a href=&quot;http://googleblog.blogspot.com/2006/02/history-deserves-best.html&quot; rel=&quot;nofollow&quot;&gt;Google&#039;s forays&lt;/a&gt; into digitization and indexing of books, images, videos, etc.  But such &quot;trespassing&quot; is not new, and it is not restricted to specific projects.  For instance, traditional archivists are sure to be taken aback at how operating systems, data replication software, and all sorts of applications use the term &quot;Archive&quot; to mean nothing more than a simple backup of electronic files.  This sort of transgression is endemic to an environment focused on short-term retention of data in jealously guarded proprietary formats.  As the gap widens between the well-worn practice of paper records and the lack of progress in developing procedures for handling the ever-burgeoning corpus of electronic records, software vendors take the upper hand in determining the future direction of digital archives.  &lt;/p&gt;
&lt;p&gt;Large institutional digital archives projects have gone a long way in bringing technology vendors into the archival process, but a much broader approach is needed -- one that instills into programmers and software architects an awareness of the long-term efficacy of the artifacts that their products create.  And this is the crux of the issue, in my opinion: the major problem that archivists are fretting about is due in no small part to the practices and conventions of software makers and Web site designers.  No &quot;solution&quot; is complete without bringing these practices into line with solid archival principles -- that is, if we wish to actually retain most of this stuff for the long term.  The &lt;a href=&quot;http://en.wikipedia.org/wiki/OpenDocument&quot; rel=&quot;nofollow&quot;&gt;Microsoft Office versus OpenDocument debates&lt;/a&gt; are only the beginning.&lt;/p&gt;
&lt;p&gt;Much of the new digital information created outside of these large institutional projects is unlikely to be able to take advantage of such forward-looking research.  For this, we need a systems theory of archival practice, one that encompasses not only the entire record lifecycle, but the very matrix from which records emerge -- the software and operating systems in which they are created.  I think a focus on this strategy, though not as effective in solving the problems inherent in already-existing electronic records, would eliminate much of the apprehension of archivists over digital records in the future.  Such a strategy, I believe, creates a number of new job descriptions for archivists.&lt;/p&gt;
</description>
 <comments>http://thomas.kiehnefamily.us/musings_on_a_systems_view_of_digital_archives#comments</comments>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/digital_archives">Digital Archives</category>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/technology">Technology</category>
 <pubDate>Tue, 14 Mar 2006 07:10:26 +0000</pubDate>
 <dc:creator>tkiehne</dc:creator>
 <guid isPermaLink="false">27 at http://thomas.kiehnefamily.us</guid>
</item>
<item>
 <title>Audio Encoding Project: Software Status Update</title>
 <link>http://thomas.kiehnefamily.us/audio_encoding_project_software_status_update</link>
 <description>&lt;p&gt;Last we heard, I was &lt;a href=&quot;/audio_encoding_project_initial_progress_report&quot; rel=&quot;nofollow&quot;&gt;having problems&lt;/a&gt; with the Winamp/FLAC combination for ripping CDs.  As it turns out, not only was I unsuccessful in figuring out what the exact issues were, let alone resolving them, but the ripping process was also not being entirely transparent about non-obvious errors.  I discovered upon listening to some of the encoded files that there were occasional errors in the form of digital audio glitches and, in some cases, truncated (prematurely ending) files.  The ripping process did not inform me of any problems outside of the major errors that I occasionally encountered (and which prompted me to investigate in the first place).  Fortunately, these hidden errors were not too frequent.&lt;/p&gt;
&lt;p&gt;Frustrated by these persistent problems with Winamp, I began to look for a better solution.  Some Googling and freeware searches later, I discovered &lt;a href=&quot;http://www.dbpoweramp.com/&quot; rel=&quot;nofollow&quot;&gt;dbPowerAMP&lt;/a&gt;, a free ripper/converter whose makers claim to have been frustrated by error-prone rippers in much the same way as I was.  dbPowerAMP meets all of the &lt;a href=&quot;/audio_encoding_project_initial_progress_report&quot; rel=&quot;nofollow&quot;&gt;specifications&lt;/a&gt; that I enumerated at the outset of the project, so I decided to give it a try.&lt;/p&gt;
&lt;p&gt;Initial tests were encouraging.  The program ripped to FLAC quickly and easily.  A checksum is computed for each track and checked against an online database to verify an error-free rip.  Such a distributed error-checking method is useful, but the single point of failure inherent in the centralized database is a bit worrying for the long-term viability of the solution.  I need to verify that there is a fallback in case the database is not accessible -- in other words, that there is some means of error-checking or prevention present in the local system.  I am also a bit disappointed that this technical metadata is not automatically appended to the file metadata -- I may either suggest this feature to the developers or figure out a way to script it myself.&lt;/p&gt;
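&lt;p&gt;Until I confirm a fallback, a minimal local safeguard would be to record my own checksum per ripped file, so that later corruption can at least be detected without the online database.  A sketch of the idea (my own, not a dbPowerAMP feature; note that FLAC itself also embeds an MD5 signature of the raw audio, which can be verified offline with &quot;flac -t&quot;):&lt;/p&gt;
&lt;pre&gt;
# Minimal local integrity fallback: record a CRC32 for each ripped file.
import zlib

def file_crc32(path):
    """Return the CRC32 of a file as an 8-digit hex string."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return format(crc, "08x")
&lt;/pre&gt;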
&lt;p&gt;The problem I ran into, however, was that the FLAC codec did not set the ID3 file metadata.  After both my first and second runs, the ID3 tags were empty.  Updating the FLAC codec to a more recent version seemed to resolve the issue: FreeDB metadata is now automatically appended to an ID3 tag, as desired.&lt;/p&gt;
&lt;p&gt;The only remaining benefit that the Winamp solution holds over dbPowerAMP is that Winamp could automatically create a playlist file for the disc; the current solution cannot.  To compensate for this lost information, I set the filenaming macro to include the track number along with the artist and track name.  This way, the directory structure will retain the original order, which will allow me to retroactively generate playlist metadata, as sketched below.&lt;/p&gt;
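&lt;p&gt;A sketch of what I have in mind for that regeneration step (hypothetical; it assumes the track number leads the filename, as in &quot;01 Artist - Title.flac&quot;):&lt;/p&gt;
&lt;pre&gt;
# Rebuild a simple M3U playlist from track-number-prefixed filenames.
import os

def write_m3u(album_dir):
    tracks = sorted(t for t in os.listdir(album_dir) if t.endswith(".flac"))
    with open(os.path.join(album_dir, "album.m3u"), "w") as m3u:
        for track in tracks:
            m3u.write(track + "\n")
&lt;/pre&gt;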
&lt;p&gt;This is the current environment:&lt;/p&gt;
&lt;p&gt;Software:&lt;br /&gt;
dbPowerAMP version 11.5, Windows 2000 SP 4&lt;br /&gt;
FLAC codec for dbPowerAMP version 5.3 (using FLAC 1.1.2)&lt;/p&gt;
&lt;p&gt;Hardware:&lt;br /&gt;
Dell Latitude C600, Pentium III 700 MHz, 512 MB RAM, 32x DVD/CD combo drive&lt;br /&gt;
Seagate 300 GB USB/FireWire combo external drive&lt;/p&gt;
&lt;p&gt;At this very moment, I have completed 8 rips in a row, with only one reported (and properly handled) error.  Ripping speed averages 4-5 times realtime, which translates to 10-14 minutes per full-length CD.  I could probably see significant improvement in speed were I using a faster computer (decreasing encoding time) and connecting to the external drive via FireWire instead of USB 1.1 (I cannot use both my FireWire and network cards at the same time).  If we use 10 minutes as a baseline average for encoding, the remaining 750 CDs will take approximately 125 hours (just over five 24-hour days) of linear effort to encode.&lt;/p&gt;
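&lt;p&gt;The arithmetic behind that estimate, spelled out:&lt;/p&gt;
&lt;pre&gt;
# Linear-effort estimate for the remaining rips, per the baseline above.
cds_remaining = 750
minutes_per_cd = 10   # baseline average rip-and-encode time
hours = cds_remaining * minutes_per_cd / 60
print(f"{hours:.0f} hours, about {hours / 24:.1f} days")  # 125 hours, 5.2 days
&lt;/pre&gt;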
&lt;p&gt;Wish me luck!&lt;/p&gt;
</description>
 <comments>http://thomas.kiehnefamily.us/audio_encoding_project_software_status_update#comments</comments>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/audio_encoding_project">Audio Encoding Project</category>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/digital_archives">Digital Archives</category>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/technology">Technology</category>
 <pubDate>Sun, 08 Jan 2006 05:29:08 +0000</pubDate>
 <dc:creator>tkiehne</dc:creator>
 <guid isPermaLink="false">22 at http://thomas.kiehnefamily.us</guid>
</item>
<item>
 <title>New Mass Storage Technology and Research</title>
 <link>http://thomas.kiehnefamily.us/new_mass_storage_technology_and_research</link>
 <description>&lt;p&gt;In the &lt;a href=&quot;/digital_preservation_plan_for_the_texas_legacy_project&quot;&gt;CHAT digital video preservation plan&lt;/a&gt; I presented an overview of digital archives technologies that includes metadata, digital storage, file formats, and repository systems and software.  Of these, digital storage technology is the most rapidly developing and changing area, with constant change in price per giga-(tera-, peta-)byte and media formats.  In the plan I hint at the fact that optical storage media (DVDs, CDs, etc.) fall far short of the storage capacities of currently available hard disk drives and arrays.  The gap is quickly closing, however, as improved storage media are announced with increasing frequency.  In preparation for a revision of the technology review as a standalone digital video technology primer, I&#039;d like to document some of these recent developments.&lt;/p&gt;
&lt;!--break--&gt;&lt;p&gt;
&lt;b&gt;Blu-Ray &amp;amp; HD-DVD&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://news.yahoo.com/s/pcworld/123491&quot;&gt;Consumer electronics manufacturers continue to wrangle&lt;/a&gt; over control and support of these two standards.  A PC World article (via Yahoo News) reveals the true nature of the conflict: content.  This particular standards &quot;war&quot; demonstrates the quagmire that results when technology and intellectual property, in its current form, collide.  This revelation, coupled with the impending release of holographic technologies that will easily dwarf the capacities of both Blu-Ray and HD-DVD, indicates that, as far as serious archival and data storage needs go, Blu-Ray and HD-DVD will probably not present any viable mass data storage solutions.  It is more likely that these formats will end up replacing DVD in the near term as consumer-level media.  Archivists in the future may have to deal with preserving the end product, but they will do so using their holographic storage contemporaries.  If nothing else, the Blu-Ray/HD-DVD fight demonstrates how overactive intellectual property fears truly stifle innovation and development.&lt;/p&gt;
&lt;p&gt;As if the competition weren&#039;t enough to cause serious concern over the archival viability of either format, Blu-Ray has committed to supporting &lt;a href=&quot;http://www.blu-ray.com/news/?date=2005-08-09&quot;&gt;various DRM technologies&lt;/a&gt; (pdf).  These include watermarking (ROM Mark), crypto and licensing (AACS), and code updatability (BD+).  The effect on pure data applications of Blu-Ray is not yet known, but any forced DRM regime that prevents open access to data would spell the end of any consideration of Blu-Ray for archival storage.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Holographic Discs&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://en.wikipedia.org/wiki/Holographic_Versatile_Disc&quot;&gt;Wikipedia&lt;/a&gt; has a brief entry for these formats that might prove informative as these technologies develop.&lt;/p&gt;
&lt;p&gt;Maxell / InPhase Technologies have announced mass holographic storage.&lt;br /&gt;
(via &lt;a href=&quot;http://www.theregister.co.uk/2005/11/24/maxell_holo_storage/&quot;&gt;The Register&lt;/a&gt; and &lt;a href=&quot;http://hardware.slashdot.org/article.pl?sid=05/11/28/141241&quot;&gt;Slashdot&lt;/a&gt;)&lt;br /&gt;
Current prototypes suggest 300 GB of storage per DVD-sized disc with a 20 Mbps transfer rate, planned for release by late 2006.  Five-year projections put this version of holographic storage at 1.6 TB per disc with a 120 Mbps transfer rate.  The anticipated archive life of these discs is estimated at greater than 50 years.  Turner Networks is already &lt;a href=&quot;http://www.computerworld.com/hardwaretopics/storage/story/0,10801,106288,00.html&quot;&gt;testing the technology&lt;/a&gt; for an anticipated conversion from tape (via ComputerWorld).&lt;br /&gt;
This technology is in direct competition with Optware HVD (Japan), a technology originally mentioned in the plan (p. 26).&lt;/p&gt;
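&lt;p&gt;One way to make those transfer rates concrete (my arithmetic, not from the announcements): filling a disc end-to-end takes over a day at either generation&#039;s quoted speed.&lt;/p&gt;
&lt;pre&gt;
# Hours needed to fill a holographic disc at the quoted transfer rates.
def fill_hours(capacity_gb, rate_mbps):
    """Hours to write capacity_gb gigabytes at rate_mbps megabits per second."""
    return capacity_gb * 8 * 1000 / rate_mbps / 3600

print(f"300 GB at 20 Mbps: {fill_hours(300, 20):.0f} hours")     # about 33
print(f"1.6 TB at 120 Mbps: {fill_hours(1600, 120):.0f} hours")  # about 30
&lt;/pre&gt;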
&lt;p&gt;&lt;b&gt;Subwavelength optical data storage&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;In a similar vein to Multiplexed Optical Data Storage (MODS), Iomega announced &lt;a href=&quot;http://www.physorg.com/news4249.html&quot;&gt;Articulated Optical - DVD&lt;/a&gt; (AO-DVD) (via Physorg.com) which promises up to 850 GB using reflective nano-structures.  Little else is known beyond the initial announcement and patent filings.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Solid State Memory&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Flash Memory Hard Drives / Solid State Discs (SSDs) &lt;a href=&quot;http://www.newscientist.com/article.ns?id=mg18625025.100&quot;&gt;are increasing in capacity&lt;/a&gt; (via New Scientist), though their capacities still trail the cutting edge in optical technologies.  16 GB capacities are promised soon, intended to replace hard drives in portable and small computing applications.  Though these may be more stable than hard drives (viz., no moving parts), they are less reliable in the long term than optical storage (viz., magnetic degradation) and are not likely to overtake holographic technologies for archival use, even if they achieve similar capacities.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://www.nantero.com/&quot;&gt;Nantero&lt;/a&gt; announced that it is developing carbon nanotube-based storage called NRAM (Nonvolatile Random Access Memory).&lt;br /&gt;
Initial prototypes have reached 10 GB on 13 cm wafers and are about 10 times faster than current flash memory.  With refinement, these could provide competition for flash memory, especially in very small applications, but for the same reasons they are not likely to overtake mass optical storage.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Mass storage arrays and distributed file systems&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;A Slashdot posting regarding &lt;a href=&quot;http://ask.slashdot.org/article.pl?sid=05/10/25/1752241&quot;&gt;&quot;home grown&quot; multi-terabyte storage arrays&lt;/a&gt; yields some interesting resources for distributed file systems.  One or more of these implementations could form the basis for future network-fabric storage.  More research is warranted; in no particular order:&lt;br /&gt;
&lt;a href=&quot;http://www.lustre.org/&quot;&gt;Lustre&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;http://now.cs.berkeley.edu/Xfs/xfs.html&quot;&gt;xFS&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;http://www.parl.clemson.edu/pvfs/&quot;&gt;PVFS&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;http://www.neopathnetworks.com/products_overview.htm&quot;&gt;File Director&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;http://www.redhat.com/software/rha/gfs/&quot;&gt;GFS&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;http://www.apple.com/xsan/&quot;&gt;Xsan&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;http://www.openafs.org/&quot;&gt;AFS&lt;/a&gt;&lt;br /&gt;
&lt;a href=&quot;http://www.ibrix.com/&quot;&gt;IBRIX&lt;/a&gt;&lt;/p&gt;
</description>
 <comments>http://thomas.kiehnefamily.us/new_mass_storage_technology_and_research#comments</comments>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/storage">Storage</category>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/technology">Technology</category>
 <pubDate>Tue, 29 Nov 2005 08:07:54 +0000</pubDate>
 <dc:creator>tkiehne</dc:creator>
 <guid isPermaLink="false">20 at http://thomas.kiehnefamily.us</guid>
</item>
<item>
 <title>More on Security vs. Usability</title>
 <link>http://thomas.kiehnefamily.us/more_on_security_vs_usability</link>
 <description>&lt;p&gt;In my &lt;a href=&quot;/systems_security_problems_and_potential_solutions&quot;&gt;Systems Security&lt;/a&gt; article, I addressed one of two areas that affect systems security: the continuum between software usability and the ability of the software to perform securely.  Scanning through Slashdot one day, I came across a &lt;a href=&quot;http://books.slashdot.org/article.pl?sid=05/11/02/1533205&quot;&gt;book review&lt;/a&gt; of an O&#039;Reilly book on this topic.&lt;/p&gt;
&lt;!--break--&gt;&lt;p&gt;The review of &lt;i&gt;Security and Usability&lt;/i&gt; indicates coverage of the topics in my paper, but in a more extensive scope than my pedestrian review (as is often the case for products that fulfill class requirements).  The ever-present human element, the core of the usability question, is covered, as is the topic of secure (trusted) systems.  Whereas my paper covered primarily end-user systems with network access, this book appears to go deeper into network security issues such as authentication and online privacy.&lt;/p&gt;
&lt;p&gt;I&#039;ll see about getting into this book soon so that I can give it a more thorough evaluation.&lt;/p&gt;
</description>
 <comments>http://thomas.kiehnefamily.us/more_on_security_vs_usability#comments</comments>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/security">Security</category>
 <category domain="http://thomas.kiehnefamily.us/blog_topics/technology">Technology</category>
 <pubDate>Mon, 21 Nov 2005 03:01:55 +0000</pubDate>
 <dc:creator>tkiehne</dc:creator>
 <guid isPermaLink="false">17 at http://thomas.kiehnefamily.us</guid>
</item>
</channel>
</rss>
