SanDisk takes on Nano with new Sansa View

Just two weeks ago, SanDisk announced its "Clip" MP3 player, its shot at the iPod Nano audience, so it comes as no surprise that just five days after Apple announced its latest lineup of players, SanDisk is ready to compete. Today the company announced its latest player, the Sansa View.

The Sansa View looks similar to Apple's previous-generation iPod Nano: tall and skinny, with navigation controls just below the screen. The View comes in 8GB and 16GB models, priced dead-on with the latest Nano at $150 and $200, respectively. The View also one-ups the third-generation Nano with a microSD/microSDHC slot, which brings total storage to as much as 24GB.

SanDisk's offering supports H.264, WMV, and MPEG-4 video playback at up to 30 frames per second, as well as DivX—if you use Sansa's Media Converter software. Supported audio formats include MP3, WMA, and WAV, and there's an FM radio to boot. In comparison, the Nano supports H.264 and MPEG-4 video (.mp4 and .m4v files), both DRM-protected and homemade.

Where the Nano has a 2" 320×240 screen, the View has a slightly larger 2.4" 320×240 screen. However, the View's screen is advertised as "widescreen." For the screen to be "wide," it looks like users would have to hold the View lengthwise instead of in its natural vertical orientation. I hope this is the case, since I can't imagine how bad video would look scrunched onto a screen that is taller than it is wide.
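For what it's worth, a 320×240 panel is 4:3 in either orientation, so "widescreen" here can only mean holding the player in landscape; it's not a true 16:9 panel. A quick sketch of the arithmetic:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce a pixel resolution to its simplest ratio."""
    d = gcd(width, height)
    return (width // d, height // d)

print(aspect_ratio(320, 240))    # -> (4, 3): same shape as the Nano's screen
print(aspect_ratio(1920, 1080))  # -> (16, 9): an actual widescreen ratio
```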

Finally, the View has a battery life of 35 hours for audio and 7 hours for video; the Nano's advertised battery life is 24 hours for audio and 5 hours for video. The View is only available in black, while the Nano comes in five colors.

Judge deals blow to RIAA’s boilerplate copyright infringement complaints

In the over 20,000 file-sharing cases filed so far, the RIAA has relied on a simple procedure: scour P2P networks for shared music, file a John Doe lawsuit to learn the identity of the account-holder responsible for the IP address flagged by the RIAA's investigative arm, and, if the account-holder doesn't agree to the RIAA's settlement terms, file a lawsuit using a boilerplate complaint. A federal judge in California has now refused to grant the RIAA a judgment based on just such a complaint, forcing the RIAA to draw up a new complaint containing specifics.

Yolanda Rodriguez was sued by the record labels for copyright infringement in November of last year. Apparently, Rodriguez is of the "ignore the problem and hope it will go away" mindset, as she never filed an answer to the complaint, and a search of the case history shows no action on her part to fight the lawsuit.

Given Rodriguez's inaction, the clerk entered a notice of default this past April. In July, the record labels asked the court for a default judgment in the amount of $3,750 (five songs at $750 each) plus $420 in court costs. Judge Rudi M. Brewster declined to give the RIAA what it was asking for, ruling that the plaintiffs' boilerplate complaint "fails to sufficiently state a claim upon which relief may be granted."

Drawing on the recent Bell Atlantic v. Twombly case decided by the Supreme Court, Judge Brewster held that the RIAA's complaint wasn't sufficient to merit a default judgment. "[O]ther than the bare conclusory statement that on 'information and belief,' Defendant has downloaded, distributed and/or made available for distribution to the public copyrighted works, Plaintiffs have presented no facts that would indicate that this allegation is anything more than speculation," wrote the judge. "The complaint is simply a boilerplate listing of the elements of copyright infringement without any facts pertaining specifically to the instant Defendant."

Bell Atlantic v. Twombly involved allegations that the Baby Bells engaged in an anticompetitive conspiracy to hinder local phone and broadband competition. The Supreme Court ruled that the mere fact that a conspiracy was conceivable and that the companies engaged in conduct that supported the conspiracy allegations wasn't enough for a lawsuit to proceed.

Judge Brewster vacated the entry of default but gave the RIAA 30 days to refile the complaint and serve Rodriguez with it. It took the RIAA a little less than a week to file an amended complaint. In contrast to the original complaint, which was extremely short on specifics, the RIAA's latest filing offers more in the way of details, including the date the RIAA spotted the PC it believes was used by Rodriguez on Gnutella, the IP address, and a list of recordings in the user's shared folder.

There's still a great deal of "information and belief," however. The RIAA is "informed and believe[s]" that Rodriguez "had continuously used and continued to use a P2P network to download and/or distribute to the public" the files contained in the shared folder as well as "additional sound recordings owned by or exclusively licensed to" the labels.

In fact, the only significant differences between the original and amended complaints are the dates, the IP address, the name of the network, and screenshots showing every file seen in the shared folder allegedly residing on Rodriguez's PC. Of course, if Rodriguez once again fails to show up in court, that may be enough to secure a default judgment.

Judge Brewster's decision may have ramifications for two contested lawsuits, Elektra v. Barker and Warner v. Cassin. The judges in both cases have indicated their intention to rule on a central facet of the RIAA's complaints, that making a song available over a P2P network constitutes copyright infringement. Copyright attorney Ray Beckerman, who is defending both Barker and Cassin, points out that the judge's ruling in Interscope v. Rodriguez supports the arguments made in the other two cases.

Google goes to court in Australia over sponsored links

Google appeared in court today in Australia to fight charges of "misleading and deceptive conduct" regarding its sponsored links. The Australian Competition and Consumer Commission (ACCC) told the judge that the search giant does not do enough to differentiate sponsored links in its search results from regular search results.

"Google represents to the world that its search engine is so good that it can rank, out of the multitudinous entries of the worldwide web, these entries in order of relevance of the user's query," ACCC barrister Christine Adamson told the court, according to AFP. "Part of that (reputation is) that it's not influenced by money, it's influenced by relevance."

The ACCC isn't a fan of the company allowing sponsored links to purport to represent one company when, in fact, they point to a competitor. The group said that, in 2005, an Australian classifieds company called the Trading Post purchased sponsored links from Google using the names of two competing dealerships, Kloster Ford and Charlestown Toyota. The ACCC says that the Trading Post violated sections 52 and 53(d) of the Trade Practices Act of 1974 and blames Google for allowing it to happen in the first place. The organization asked the court to ban Google from publishing sponsored links that represent a business relationship that doesn't exist, to require Google to clearly distinguish sponsored links from regular results, to order Google to establish a trade practices compliance program, and to award costs.

If the ACCC wins, Google will need to implement some major changes to its sponsored link program that could ultimately drive up costs to advertisers. As it stands right now, the system is mostly automated—in order to prevent companies from purchasing sponsored links under the names of other companies, Google would have to add in a somewhat significant level of human monitoring to the process, which would increase its overhead costs.

Until next month, however, we're only left to speculate on what might happen in this case. Australian Federal Court judge Jim Allsop adjourned the case until October 4.

SCO to face judge, not jury, in Novell trial

The remaining claims in the legal battle between SCO and Novell will not be heard by a jury, Judge Dale A. Kimball said in a decision granting Novell's motion to strike SCO's demand for a jury trial.

Kimball effectively ended SCO's "slander of title" lawsuit against Novell last month when he issued a ruling declaring that Novell—and not SCO—is the owner of the original UNIX copyrights. At the time, Judge Kimball also determined that SCO had breached its fiduciary duty to Novell by failing to turn UNIX licensing royalties over to Novell.

Under the terms of SCO's original agreement with Novell, SCO was permitted to sell UNIX licenses to third parties but had to turn all but 5 percent of the royalties over to Novell. Although Judge Kimball has already determined that SCO owes 95 percent of its UNIX royalties to Novell, the question that remains is what portion of royalties collected by SCO from UNIX-related licensing agreements was for UNIX specifically and how much was for assorted UnixWare intellectual property that SCO developed independently.

SCO collected over $25 million from Microsoft and Sun through UNIX licensing agreements. Although SCO now claims that those agreements were primarily for UnixWare intellectual property, there is very little evidence to support that assertion. In fact, SCO's description of those agreements in a July 2003 SEC filing seems to indicate that the agreements related directly to UNIX source code, not UnixWare:

"[One of the licenses] was to Microsoft Corporation ("Microsoft") and covers Microsoft's UNIX compatibility products, subject to certain specified limitations. These license agreements are typical of those we expect to enter into with developers, manufacturers, and distributors of operating systems in that they are non-exclusive, perpetual, royalty-free, paid up licenses to utilize the UNIX source code, including the right to sublicense that code."

To determine whether the question should be brought before a jury, Judge Kimball had to first evaluate the nature of the claims and determine if the remedies sought fall under common law or equity. Traditionally, the right to a jury trial does not exist in breach-of-contract cases where the remedy sought by the plaintiff is simply enforced fulfillment of a contractual obligation rather than monetary damages. In the absence of the right to a jury trial, the trial is brought before a judge instead. "In this case, the court has found that Novell has an equitable interest in the SVRX Royalties and met the requirements for imposition of a constructive trust for the amount of SVRX Royalties improperly in SCO's possession," Judge Kimball wrote in his decision. "Therefore, the court concludes that Novell's breach of contract, breach of fiduciary duty, constructive trust/restitution/unjust enrichment, and conversion claims are equitable in nature given the nature of the relief sought under these claims and the limited issues remaining for trial. Accordingly, none of these claims provide a right to a jury trial."

SCO's current assets add up to just under $20 million, and the company continues to report losses every quarter. It seems likely that the company's days are numbered.

PTC forgets about DVRs, trashes trashy TV fare

Are appropriate television programs for children growing scarce, or have parents never had it so good? As usual, it depends whom you ask and how you measure things. If you ask the Parents Television Council ("Because our children are watching"), they'll tell you that the situation is dire. In a new report on the so-called "family hour" (PDF), the PTC laments the state of the 8-9PM block, calling it "even more hostile to children and families" than at any time in the past.

In fact, the report talks lovingly about the National Association of Broadcasters Code of Conduct ("the Code") which governed TV studios for years until it was tossed out on antitrust grounds. In PTC-land, television is a dangerous wasteland of "crap," "suck," and "douche bag" (all on the "foul language" list), and prime time is no exception.

Children, as the group's tagline puts it, certainly are watching. The PTC quotes an independent study that showed that children ages 8-18 watch an average of three hours of TV per night. But it's what's on the TV that bothers the PTC, not so much the fact that many American children are putting in the equivalent of a part-time job during the school year in front of a TV set.

Sexual content during the "family hour" has risen by 22.1 percent in the last six years and violence is up 52.4 percent. Foul language, surprisingly, is down by 25 percent, though the "non-minor swear words" have made some significant (if bleeped) gains. The fixation on quantifiable bad behaviors leads the report's authors to churn out some odd verbiage ("rates of bleeped 'shit' increased more than fourfold"), but the basic conclusion seems solid: prime time broadcast television is filled with plenty of things that plenty of parents don't want their 24-hours-of-TV-a-week kids to see.

More interesting than the report's litany of complaints (did you know that American Dad was the worst-rated show on television, with a reported 52 "incidents" an hour?) is a rejoinder to the report by the Progress and Freedom Foundation's Adam Thierer, who responds to the PTC's "worst of times" rhetoric with some "best of times" thoughts.

Thierer's basic point is that these are golden years for consumer choice when it comes to video content. The massive installed base of DVD players, the explosion of cable TV channels, the rise of video on demand, and the power of the DVR have all made broadcast schedules less relevant than they have ever been.

"I happen to agree with the PTC that not all of the programming shown on broadcast TV at 8 p.m. is appropriate for my children," writes Thierer. "But like millions of other parents, I can now take matters into my own hands."

The assertion is borne out by parenting practices here in the Orbiting HQ. While some might question the wisdom of raising children in a zero-gravity environment, you just can't beat the view. Plenty of us on staff here have small children, and DVDs and DVR libraries of children's programming have proved to be a great way to keep the kids entertained with appropriate material (once they finish cleaning out the air scrubbers, of course).

As content options proliferate onto the web and even the traditional network broadcasters put complete streaming shows online, scheduled programming is losing much of its clout. With consumers watching more material on their own schedules, "family hours" might be getting more crass, but there's less need for those who crave a TV fix to get it when the broadcasters offer it.

Human adaptation to food and travel

The completion of the human genome, along with the genomes of some of our closest primate relatives, has allowed us to examine the evolutionary changes in our species with remarkable precision. Results on the topic keep pouring in at rates I would never have predicted. Two more papers published over the weekend in Nature Genetics take a look at some recent adaptations to what we eat and how we get ahold of it. The results show how either the gain or the loss of a gene can make for a net gain in human fitness.

The first paper is another example of how our genome is adapting to our diets. It looks at the amylase gene (AMY1), which helps break down starches. Starches have become a significant part of the human diet, at least in part through the advent of agriculture. A look at the AMY1 gene sequence did not reveal any major differences in the actual sequence; instead, humans seem to typically carry about six copies of the gene, or three times the number found in chimps.

The researchers found that the amount of amylase produced scaled in a linear manner with the number of gene copies, suggesting that this high copy number can be adaptational. They also found that people in societies with high starch diets had a small but statistically robust increase in AMY1 copy numbers: 70 percent of high starch eaters had over six copies of the gene, compared to under 40 percent of those with low-starch diets. This held true in geographically diverse populations, including those who got starch primarily from hunting/gathering, suggesting that selective pressures continue to influence gene copy numbers. The authors say that this is the first time that copy number variations, known to be common in humans, have been shown to be adaptational.
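The dosage relationship the researchers describe can be put in toy form (the per-copy output here is an arbitrary made-up unit, not a measured value; only the linear scaling and the copy counts come from the article):

```python
# Toy linear (gene-dosage) model: amylase output scales with AMY1 copy number.
# The per-copy output value is an arbitrary placeholder unit.

def amylase_level(copies, per_copy_output=1.0):
    """Protein output under a simple linear dosage model."""
    return copies * per_copy_output

chimp = amylase_level(2)  # chimps carry roughly two copies
human = amylase_level(6)  # humans average about six copies

print(human / chimp)  # -> 3.0, matching the ~3x difference the article cites
```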

The second paper looks at a protein called α-actinin, a specialized form of which is involved in fast muscle function. One allele of the muscle-specific gene in humans, ACTN3, has a mutation that creates a shortened, non-functional protein. Roughly a billion humans are estimated to carry this null mutation, but the reasons we're carrying a damaged gene have been unclear. Some tantalizing data came in the form of studies of elite athletes: sprinters tended to have a functional version of ACTN3, while distance runners lacked it.

The research team knocked out the mouse version of the gene and examined how the muscles in those mice changed as a result. They found that these muscle fibers showed a big boost in the activity of proteins involved in aerobic metabolism, which efficiently produces usable energy when the muscles are not oxygen-starved. The authors explored whether this change towards slower but more efficient muscles might have been adaptational by looking for signs of a selective sweep; they found it in two populations. In Asians, the sweep appears to have started about 30,000 years ago, while it only began in Europeans about 15,000 years ago. That much time should have been sufficient for the damaged gene to reach 100 percent frequency, so the authors suggest that some other factors must be influencing the selective pressure.

Overall, these results (along with related data on the lactase gene and schizophrenia) show just how much has been going on recently in human evolution. Many of these changes appear to have occurred after the origin of modern humans, emphasizing how selective pressures continue to shape our genetic inheritance. The other point these studies make is that all of these adaptive changes are going on at the same time. In short, evolution is a parallel process—by solving many problems at once, it makes up for the inefficiencies and randomness inherent in its operation.

Nature Genetics, 2007. DOI: 10.1038/ng2123
Nature Genetics, 2007. DOI: 10.1038/ng2122

Barcelona’s out, and the “reviews” are… out

Today is the official birthday of AMD's quad-core Barcelona, and finally the wait is over. (It's also my official birthday… kind of funny that I share one with a CPU.) I've covered pretty much everything launch-related you'd want to know about Barcelona in previous posts—pricing and launch speeds, likely microarchitecture, system architecture, and big picture and competitive positioning, to name a few—so I'll devote the launch-day coverage below to taking a quick walk through the revelations that launch day brings, such as they are.

There aren't many reviews out this morning, and the few that are up aren't worth looking at (see the next section for why this is the case). The only bright spot in this picture is Scott's review at Tech Report, which is about as close as anybody could come to a "good" review of a brand new system with only one weekend to tweak and poke. And Scott's review is good because it raises almost as many questions as it answers.

First up, Scott's results show that Xeon rules in cache bandwidth and Barcelona rules in main memory bandwidth. Latency, however, is a different and more disappointing story. It seems that Barcelona's L3 cache latency is high enough that it causes at least some benchmarks to score the cumulative latency of Barcelona's memory hierarchy as not much better than Xeon's. It's going to take some more digging by benchmarking with real-world apps to see how much of this latency problem is an artifact of the benchmark's scoring mechanism, where latency is cumulative over the entire hierarchy, and how much this actually impacts real applications.
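For context, latency benchmarks of this sort typically use pointer chasing: every load's address depends on the previous load, so the traversal time reflects the latency of whatever level of the hierarchy the working set lands in. Here's a rough sketch of that access pattern (in Python, where interpreter overhead swamps real cache effects, so this only illustrates the technique, not actual hardware latencies; a real benchmark would do this in C or assembly):

```python
import random
import time

def chase(num_slots, steps):
    """Walk a random permutation cycle; each access depends on the last."""
    # Build the cycle: following nxt from any slot visits every slot once.
    order = list(range(num_slots))
    random.shuffle(order)
    nxt = [0] * num_slots
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    # Time the dependent-load chain.
    i = 0
    start = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]
    elapsed = time.perf_counter() - start
    return elapsed / steps  # average seconds per dependent access

small = chase(1 << 10, 100_000)  # small working set: would fit in cache
large = chase(1 << 20, 100_000)  # large working set: would spill to DRAM
```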

As expected, Xeon seems to keep the lead over Barcelona in integer performance, not that it's easy to tell, since most benchmarks were floating-point-centric. And speaking of floating-point, Barcelona's floating-point showing is a real head-scratcher. Contrary to what I suggested in my previous post on Barcelona, the launch-date "reviews" uniformly show Barcelona with little or no floating-point advantage over Xeon.

Barcelona's floating-point performance

When it comes to Barcelona's floating-point results, there are two major issues to think about here: two-socket versus four-socket and clockspeed scaling. First, all the systems in the reviews that I saw were dual-socket. It has been my contention for some time now (see the posts linked above) that Barcelona's real chance to shine will be in four-socket systems. This is because Barcelona's main advantage is in the bandwidth advantages afforded it by its system architecture, and those advantages really begin to kick in with four-socket configurations.

A related issue is that on a per-core basis, Barcelona's floating-point performance may just not be good enough. The Core-based Xeons have extremely muscular floating-point and vector hardware, and I definitely didn't think that Barcelona would surpass it (or even match it, really) on a per-core basis. However, it's clear that Xeon is bottlenecked by its system architecture, so I thought that Barcelona's ample bandwidth would give it a floating-point edge.

However, Xeon's bandwidth bottleneck isn't nearly so pronounced in two-socket configurations as it is in four-socket configurations, so at two sockets the two processors' respective floating-point ALUs can duke it out on a relatively more level playing field. In this scenario, Xeon's superior floating-point and vector hardware carries it and gives it the 3D rendering scores that match and beat Barcelona's.

So my previous prediction that a four-socket Barcelona will still dominate in floating-point performance and performance/watt has yet to be tested by any of the reviewers. Let's hope we see some tests of this soon.

As far as clockspeed goes, it seems that Barcelona has just enough there that a good round of clockspeed boosts could put the results in a different light. But by the time those much-needed clockspeed gains materialize, Intel's 45nm "Penryn" Xeons will be upon us, and the landscape will have changed again.

You call this "reviewing"?

What if I told you that you could benchmark a brand new microprocessor architecture—indeed, a brand new system architecture (since upgrading from dual-core Opteron to quad-core Barcelona in the same socket really is a substantial change in overall system architecture)—in just a weekend?

If I told you that, I'd be lying. And this is why almost all of the handful of Barcelona "reviews" that went live today do little to increase our knowledge of AMD's latest. In a move that we've seen again and again from hardware companies that want to stack the review deck on launch day, AMD shipped Barcelona systems to hardware reviewers on Friday for a launch on Monday. This kind of behavior is designed to produce failed reviews that are shaky and opaque so that the hardware company can control the launch-day media narrative through a combination of an information shortfall and of spinning the little bit of info that is there.

But of course, everyone who does launch-day reviews knows the game, and reviewers do have a choice in whether they want to go along with the industry-standard abuse and manipulation in the name of being "first." So there's plenty of blame to go around.

Magic eight-ball says…

By way of conclusion, here's a quick summary of my launch-day impressions of Barcelona:

Single-socket desktop: Barcelona underperforms Core-based offerings from Intel, so AMD will have to ratchet up the clockspeed and keep prices down to be competitive.

Dual-socket servers and workstations: In spite of the reviews, the picture here is murky. Intel is going to continue to carry this area in terms of raw performance, at least until Barcelona's clockspeed gains materialize. As far as performance/watt goes, the story may be different. It seems to me that Barcelona is a lot more competitive here as a platform than it is in raw performance, so this factor may actually keep AMD from losing more ground than it already has to Intel in the dual-socket space.

Four-socket servers and HPC: My prediction—repeated multiple times over the last six months—that Barcelona will win in four-socket floating-point performance and performance/watt is still untested, so I hope someone tackles that soon. As for integer performance and performance/watt, Tigerton will be very, very hard to beat. Intel's system engineers have done a fantastic job with what they have to work with, and it shows in Tigerton's integer performance. I'm not confident that Barcelona can really take them down here, but I'd love to be proven wrong.

I'll be out in San Francisco this week, and I may end up meeting with AMD while I'm there. So we'll see if and how these initial impressions change in the coming days.

The quantum mechanical mirror

Earlier this year, we reported on results that showed how small mirrors could be cooled by laser light. Later, at the European conference on electro-optics, there was much excitement and a few arguments over whether these mirrors could be thought of as quantum objects—and, if so, whether the light and the mirrors could be entangled, and what that would mean exactly. In many ways, this argument was quite abstract, because even when mirrors were cooled, they were still hot enough that no quantum behavior would be observable.

In a recent Physical Review Letters paper, a pair of physicists from the University of Arizona proposed an experiment that should allow a mirror to be cooled to its lowest vibrational state. Once there, the mirror should remain in the lowest state for a few thousand oscillations before thermal noise kicks it out again. Nevertheless, this would be long enough to observe quantum mechanical behavior—discrete vibrational levels and superposition of the lowest vibrational states, among other things.

Before I describe their proposal, it is important to understand how current mirror trapping and cooling experiments work. Mirrors vibrate due to thermal noise and the pressure of light incident on their surface. Essentially, researchers use the radiation pressure from laser light to counteract the thermal noise and cool the mirror. The experiments are conceptually very simple. An optical cavity is created, in which light is reflected back and forth between two mirrors. For an optical cavity to hold the light, the distance between the two mirrors must be a whole number of half wavelengths—this is called the cavity resonance. If the color of the light is tuned so that the wavelength is not quite a whole number of half wavelengths, then the intensity of the light at the target mirror oscillates periodically. This means that the light pressure on the mirror oscillates and can drive it into motion. Indeed, if the light color is chosen so that it is just slightly too blue, the oscillations will reinforce the natural motion of the mirror and heat it up. Counterintuitively, this is also called a trapping frequency because, although it heats the mirror up, it also forces the mirror to vibrate at a particular set of frequencies. As you might expect, choosing a color that is slightly too red will cool the mirror; however, red detuning doesn't confine the mirror to a particular set of frequencies—hence it is cooling but nontrapping.
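To make the resonance condition concrete, here's a small sketch (the cavity length and mode numbers are illustrative, not values from the paper):

```python
# Resonance condition: a cavity of length L holds light when
# L = n * (lambda / 2), i.e. lambda_n = 2L / n for integer n.

def resonant_wavelengths(cavity_length_nm, mode_numbers):
    """Map each mode number n to its resonant wavelength, in nm."""
    return {n: 2 * cavity_length_nm / n for n in mode_numbers}

L = 25_000.0  # a 25-micron cavity, purely for illustration
modes = resonant_wavelengths(L, range(78, 81))
for n, lam in sorted(modes.items()):
    print(n, round(lam, 2))  # n=79 gives ~632.91 nm, near the HeNe laser line

# Tuning the laser slightly to the blue (shorter wavelength) of a resonance
# heats and traps the mirror; tuning slightly to the red cools without trapping.
blue_detuned = modes[79] * 0.9999
red_detuned = modes[79] * 1.0001
```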

If the experiment is that simple, why haven’t we seen quantum mechanical behavior in mirrors already? The answer lies in the nature of the optical cavity. The resonance wavelength for the cavity is, in principle, a precise value. But the mirrors are not perfectly reflecting, so the resonance covers a range of wavelengths. In general, the sharper the resonance, the more effective the cooling is. Except that a very sharp resonance means that the mirrors must be very highly reflecting and the light in the cavity can become very intense, leading to problems such as bistability—where the cavity flips between two different resonance conditions randomly.

Bhattacharya and Meystre have proposed a slightly modified experimental geometry, which should allow a mirror to be cooled to its lowest vibrational energy state, even if the environment is at room temperature. The difference is that the mirror is placed between two other mirrors, so that two optical cavities are formed. Both cavities are identical, but the color of the radiation put into each cavity is different: one cavity receives radiation that traps and heats the mirror, while the second receives light that cools it. Now the mirror is trapped to a specific set of frequency values, and the light in the other cavity can extract energy and cool the mirror. Although the cooling cavity can still be subject to bistability, the trapping cavity stabilizes it somewhat, which should allow researchers to use much sharper cavity resonances. This, in turn, makes for more efficient cooling.

What this does not tell us is whether a big (e.g., several micrograms) mirror is a quantum mechanical object. However, once operating, this experiment will allow us to explore that boundary in detail.

Physical Review Letters, 2007, DOI: 10.1103/PhysRevLett.99.073601

Apple may enter bidding war for 700MHz spectrum

When you hear about the upcoming 700MHz spectrum auction, the discussion tends to center on the rules of the auction or on how the traditional telcos (T-Mobile, AT&T, etc.) feel about it. Recently, news that Google may be planning to bid on a chunk of the spectrum has stirred things up a bit. Google is definitely the new kid on the block in this auction, since it has very little experience operating wireless networks compared to the other bidders. Google may not be the only inexperienced bidder with a plan for new services, however. A new Business Week article suggests that Apple is also considering getting in on the auction, a move that would give both Google and the telcos a run for their money.

But why does Steve Jobs need a wireless network? As it turns out, there are lots of reasons Apple would want to own a chunk of a spectrum. The iPhone could benefit greatly from an Apple-owned network, since Apple could then do away with AT&T and offer voice and data service to iPhone customers directly. And of course, a widespread Apple network could mean the ability to purchase content from the iTunes Store, without even needing a WiFi hotspot.

The article also mentions that Apple could use a network for "cloud computing," which would involve all of your Apple devices being connected to a larger wireless network. You could then order a movie on your Mac and have it sent to your Apple TV, as well as buying all sorts of other content whenever you wanted it and wherever you happened to be. Essentially, Apple would be creating an even larger network of products than already exists in order to distribute lots of content, a direction that it is certainly plausible Jobs wants to go in.

In the end, it is not too likely that Apple will actually make a play for any of the 700MHz spectrum. Sure, Apple has the cash, and it could certainly use the spectrum for some cool stuff, but it may not make sense for the company to get into the network business. Running a large-scale wireless network is an involved task, as well as a money sink. There's a good chance that Apple would contract out a large portion of the network's operation. Even then, it's still an expensive proposition that brings with it a lot of problems, and we all know how Steve Jobs feels about complexity. Still, if he's that anti-carrier, we may yet see Apple plunk down some cash and make a bid for some spectrum that it can play with.

iPhone software unlock now for sale through various resellers

Get out your credit cards, hopeful (and non-risk-averse) iPhone unlockers—iPhoneSIMfree has finally gone live. After numerous false starts and promises, as well as what might have been nothing more than the obligatory legal warning shot from AT&T, iPhoneSIMfree has opened its doors to resellers. With resellers currently in four countries, the software retails for a whopping $99 in the US and unlocks phones via WiFi.

Be aware, however, that there is no guarantee that Apple won't override this unlock with a future firmware update. While it stands to reason that AT&T is more worried than Apple about unlocking (after all, only AT&T fired up its lawyers when these stories first began surfacing, and unlocked iPhones simply mean Apple sells more hardware), the agreement between the two companies could mandate that Apple take every measure to counter unlocking tools like this. The FAQs at both iPhoneSIMfree's site and the resellers' contain multiple questions tackling this upgrade-proofing concern, clearly stating that if Apple does overwrite the unlock with new firmware, neither of these companies will throw you a bone with a new unlocking tool.

Our advice? If you take the leap, be sure to stay on top of firmware news when it's released and wait to upgrade your iPhone until you hear whether the unlock gets overridden.

That said, according to Engadget, all of these companies (iPhoneWorldwideUnlock in Australia, 1digitalphone in Germany, in Saudi Arabia, and Wireless Imports in the US) have so far licensed the software from iPhoneSIMfree, and we expect to see more in the coming months. That is, unless Apple legal and AT&T team up to stomp out the resellers—not out of the question, considering that the DMCA currently prohibits the sale of unlocking software to end-users in the US. Again, because these companies don't promise to help you out if Apple releases an update that "fixes" the unlock, and since the legal climate around phone unlocking is so murky right now in the US, we urge you to proceed at your own risk if you plan to purchase one of these software unlocks.

Until then, we'll keep waiting for a free software solution.