Saturday, October 29, 2005
Although not constitutionally mandated, the United States has long held its elections on the first Tuesday after the first Monday in November. The why behind that choice is beyond my personal knowledge; it's just the way we do things. For the most part, the phrase "first Tuesday after the first Monday in November" can be restated as "the first Tuesday in November," because six years out of seven, a Monday will come before the first Tuesday of the month.
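The rule is mechanical enough to compute. Here is a quick Python sketch of my own (nothing official) showing both readings side by side; they disagree only when November 1 is itself a Tuesday:

```python
from datetime import date, timedelta

def election_day(year: int) -> date:
    """First Tuesday after the first Monday in November."""
    d = date(year, 11, 1)
    # Walk forward to the first Monday (weekday() == 0)...
    while d.weekday() != 0:
        d += timedelta(days=1)
    # ...then take the Tuesday right after it.
    return d + timedelta(days=1)

def first_tuesday(year: int) -> date:
    """The casual 'first Tuesday in November' reading."""
    d = date(year, 11, 1)
    while d.weekday() != 1:  # weekday() == 1 is Tuesday
        d += timedelta(days=1)
    return d

# The readings agree except when November 1 falls on a Tuesday:
print(election_day(2005), first_tuesday(2005))  # 2005-11-08 vs 2005-11-01
```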
Not this year! This year election day is the 8th of November, because the first day of the month is a Tuesday. Which brings us to the state of Colorado, home of the Taxpayer's Bill of Rights (a.k.a. TABOR). TABOR is a limit on government growth adopted by the people of Colorado several years ago to keep taxes and spending low. Washington State has something similar, known as I-601, but it's far more relaxed. The drafters of TABOR, in their infinite, government-crippling wisdom, mandated that any changes to TABOR must be put to a vote on... wait for it... the first Tuesday of November.
So Colorado, home of the nation's most restrictive spending limits, must hold its TABOR election on the 1st and its statewide elections for every other race on the 8th. Now that's good government spending.
Monday, October 10, 2005
Regulating Network Peering
The economic forces behind the internet remain, for me, the largest mystery of what makes the whole thing go. I understand the client/server relationship, possess a decent grasp of the seven layers of network communication, and even get DNS. But how the companies that actually own the network make money has always puzzled me.
As part of the server colocation service I use for LegSim, I pay a fair chunk of the monthly bill for bandwidth. I assume the facility takes that money, skims off a percentage, and uses the rest to buy bandwidth from someone else... who in turn does the same with all of the various networks it connects to, and so on and so forth, until all of the money is disbursed to all of the networks on which my users reside.
The only problem with this scheme is transaction costs (as with so many things in life). Monitoring the connectivity of every network to every other network must be a massive undertaking, resulting in tremendously complex billing. Which is why networks often use a billing method called peering. With peering, networks of similar size agree to charge each other nothing, on the assumption that they are both using each other's resources roughly equally and it will all cancel out in the wash--peering does it all without the transaction costs.
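To make the trade-off concrete, here is a toy model; every number in it is hypothetical, not drawn from any real contract. Metered transit produces a bill that has to be measured and argued over, while peering replaces the whole exercise with a rough balance test:

```python
# Toy model of two networks exchanging traffic. All figures are made up.
RATE = 30.0  # assumed dollars per Mbps of metered transit

def metered_settlement(a_to_b_mbps: float, b_to_a_mbps: float) -> float:
    """Each side bills the other for traffic carried; net payment A -> B."""
    return (a_to_b_mbps - b_to_a_mbps) * RATE

def peering_settlement(a_to_b_mbps: float, b_to_a_mbps: float,
                       tolerance: float = 0.25) -> float | None:
    """Settlement-free if traffic is roughly balanced; otherwise no deal."""
    larger = max(a_to_b_mbps, b_to_a_mbps)
    smaller = min(a_to_b_mbps, b_to_a_mbps)
    if (larger - smaller) / larger <= tolerance:
        return 0.0   # balanced enough: nobody bills anybody
    return None      # imbalanced: peering breaks down, someone demands payment

print(metered_settlement(1000, 950))   # 1500.0 -- plus the cost of metering it
print(peering_settlement(1000, 950))   # 0.0 -- call it even
print(peering_settlement(1000, 400))   # None -- the dispute scenario
```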
Seems this model is beginning to break down. Last week two sizable network owners disagreed on whether they were comparably sized. As a result, one of them shut off connectivity to the other until the other forked over payment. That didn't end well for either of them, and now my telecom hero, Congressman Rick Boucher of Virginia, is telling the industry he plans to introduce legislation allowing the FCC to regulate peering deals. Supposedly the new FCC power will be limited to traffic-cop status, letting it resolve peering disputes between network competitors. But if the history of the FCC is any indication, I doubt it will take very long for the traffic cops to start erecting checkpoints and Jersey barriers in the name of network reliability.
Sunday, October 09, 2005
Indexing Everything
There are moments in technology I consider revolutionary. Moments like the realization that all data can be stored with nothing more than a simple bit toggle, the separation of the network from the data, the creation of the graphical user interface... events that fundamentally alter everything.
Google Chief Executive Eric Schmidt is quoted by CNet as saying that it will take Google 300 years to finish indexing all the world's knowledge. This is a revolutionary moment. Stop for a second to consider how long 300 years is... done? Good. Because what is remarkable about that statement, in the truly revolutionary sense, is not the time period but the conceptual possibility of indexing all of the world's information. It is nothing short of awe-inspiring.
Returning for a moment to the issue of time... I wonder if that estimate is based on a purely mechanical understanding of the process of indexing, or if it also considers how long it will take to gain access to all of the world's data? The cost of information climbs every day and shows no sign of slowing. One who controls the data possesses a unique advantage, and is unlikely to relinquish it without a fight. How exactly does Google plan to convince those "in the know" to let the rest of us in?
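For what it's worth, the "purely mechanical" reading is just division. The numbers below are placeholders I made up to land near 300; they are not figures Google has published, but they show the shape of the estimate:

```python
# Back-of-envelope only: both inputs are invented placeholders,
# not anything Google has actually disclosed.
total_information_tb = 5_000_000_000      # assume ~5 billion TB of "everything"
indexing_rate_tb_per_year = 16_700_000    # assume a fixed yearly indexing rate

years = total_information_tb / indexing_rate_tb_per_year
print(f"{years:.0f} years")  # ~299 years, under these assumptions
```

Notice the hidden assumption: a fixed rate against a fixed pile of data. If the world's information grows faster than the indexing rate, the finish line recedes forever, and that's before anyone refuses to hand over the data.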
A Sad Day
After much resistance, today I turned on the word verification service provided by Blogger. This will keep the bots from visiting my blog and leaving delightful comments. Many of the comments are quite hysterical, reflecting a very sophisticated parsing algorithm. Most actually provide some sort of lucid comment about the post, and then advertise any number of "interesting" services, my favorite being a site about "coffee table" (singular, yes). The best part of the bot comments is that they made it appear as if people were actually reading my blog... but I think that appearance is officially outweighed by the annoyance.
Thursday, October 06, 2005
How is This Not an Antitrust Violation?
At a recent Linux event, a top Microsoft exec in charge of platform strategy was recorded as saying that Microsoft would not be releasing a version of MS Office for Linux. This of course elicited the standard response from the FOSS community: we don't need Microsoft to make a top-notch productivity suite, and we're better off without them. I'm not so sure about that, but I have a different beef with the comment. As usual, a quote is illustrative:
Microsoft is 100 percent focused on Windows: We have invested billions of dollars in it. We have created Office for the Mac but--and I thought I had been clear on this already when I said 'No'--we have no plans at this time to build Office on Linux.
Seems like the standard line you would expect from a platform strategist, it being strategic and all.
The problem is that Microsoft is a known monopolist. There is no legal question about that, and it is thus held to a higher standard under our antitrust laws. One of those standards is that it cannot use its monopoly status to maintain its monopoly. Under US law it's not bad to be a monopoly, only bad to act like one. When Microsoft turns down an opportunity to expand the MS Office install base into the Linux world because it wants to shore up support for Windows, that is using monopoly power to sustain an existing monopoly. No reasonable industry competitor would turn down an opportunity to expand into a growing market, and our laws exist to ensure that the improper incentives of industry consolidation and predatory pricing don't get in the way of serving consumers and fostering competition. The only remaining question for me is whether the Federal Trade Commission or the DOJ will actually enforce the law.
Tuesday, October 04, 2005
Interpreting the Constitution: Originalism
There is a popular method of interpreting the Constitution among conservatives called "originalism." The theory says that if there is ambiguity in the words of the Constitution, you must look back to what the original drafters believed them to mean. This way of thinking is most established in the area of 7th Amendment jurisprudence, where the drafters used the term "common law": we use the conception of common law from 1791 as the basis of what is and is not common law for the purposes of the 7th Amendment.
Conservatives, especially social conservatives, like originalism because it cannot be used to justify a constitutional right to an abortion. Since the words cannot be found in the document, and the drafters of the Constitution, the Bill of Rights, and the 14th Amendment had no intention of protecting the right to choose at the time of adoption, there is no room for such a right under that interpretive scheme. It also means a narrower reading of the 1st Amendment, the Commerce Clause, and a whole host of other items that allow the Federal government to get big and powerful.
Today I learned that originalism has a more serious flaw than the obvious problem that the Constitution shouldn't be a stiff, unbending document. It has to do with Brown v. Board of Education. There is an old saying about constitutional theories: if you cannot arrive at the belief that Brown was rightly decided, you don't have a good theory. The problem is that the Congress which adopted the 14th Amendment, the Amendment the Court in Brown said made segregated schools unconstitutional, approved of segregated schools in Washington, D.C. the same year it passed the 14th.
Seems to me that represents pretty clear original intent that segregation was all fine and good. Which raises the question... do we really want justices who believe original intent is such a great interpretive theory?
Policed by our own Property
Slashdot is often a good way to discover outside resources--links to CNet and CNN that I might not otherwise read. It's like having 100,000 people reading everything about everything, figuring out what might appeal to a geek like me, and then posting it. So I appreciate Slashdot as a source of outside news. But that appreciation rarely extends to the actual Slashdot content.
Comments, editorials, and the posts themselves are often of poor substance. I only read at +5 (the highest level of moderation), and even then the posts are rarely worth my time. But today I encountered something worthy of a direct link from my blog to a Slashdot post. Behold.
The referenced article itself is interesting, but what I really liked about this post was the phrase "policed by our own property" in reference to Digital Rights Management (DRM). I think a lot about DRM but don't write much about it, because I'm really of several minds on the subject. On one hand, I don't like the idea of giving more control to content producers... it seems they have enough with current copyright law. On the other, I believe DRM could be designed to create a more efficient way of distinguishing freely accessible works from those which must be paid for.
Consider for a moment if all digital works were wrapped in a single common DRM. That DRM would certainly identify improper use, with all the problems that entails, but it could just as easily announce to one and all, "Share me with everyone and make new works of great wonder." I think that could be a real boon for the Creative Commons and similar organizations.
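No such common wrapper exists, so this is pure speculation, but a sketch makes the idea concrete. Every name and field below is invented for illustration:

```python
# A hypothetical schema for the "single common DRM" idea -- no such
# standard actually exists; the field names here are made up.
from dataclasses import dataclass

@dataclass
class RightsWrapper:
    work_id: str
    license: str          # e.g. "all-rights-reserved" or "cc-by-sa"
    may_copy: bool        # may the work be redistributed?
    may_derive: bool      # may new works be built from it?

def announce(w: RightsWrapper) -> str:
    """The wrapper speaks for the work, in either direction."""
    if w.may_copy and w.may_derive:
        return "Share me with everyone and make new works of great wonder."
    return "Paid use only -- ask the rights holder."

print(announce(RightsWrapper("song-42", "cc-by-sa", True, True)))
print(announce(RightsWrapper("film-07", "all-rights-reserved", False, False)))
```

The same machinery that enforces "no" could advertise "yes," which is the efficiency case for DRM in a nutshell.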
But I think this little phrase from Slashdot has changed my thinking. This isn't just an economic policy issue that can be answered with efficiency models; a shift in viewpoint is required. I suggest the problem is one of fundamental liberty. I begin with a simple question: is it consistent with American political philosophy to empower our car to decide that it can go on certain roads and not others? Note that's different from saying the law can make that decision and the cops can enforce it. My example gives the authority of decision and enforcement to the car. Our property becomes the judge of our actions, but this judge isn't going to be interested in so-called extenuating circumstances.
DRM is no different. I pop in a DVD and the DVD decides if it's going to play or not... or maybe my computer, which I own, decides for me. The decision is made not based on choices I made, but on choices others have made. We would be policed by our own property, and that strikes at the very liberty American political philosophy claims to be all about.
Does the GPL Hold Back Linux?
I marked this post from a ZDNet blogger because when I first read it I got very upset. His thesis is that the GPL holds Linux back. I happen to be a fan of the GPL, so it's not an argument to which I'm very sympathetic. However, recent GPL 3.0 discussions suggest a wise observer ought to listen to as many voices as possible... you just never know if something valuable might be said.
The problem with Mr. Murphy's argument is that the conclusion doesn't follow from the arguments. He starts with the idea that Linux adoption has slowed, which I'm willing to grant for the sake of argument, and poses the question of why. He discards the popular theory of Microsoft as the better innovator, which makes sense to me since there hasn't been a Windows release since 2002. He advances the theory that as external factors like press popularity faded, internal issues became more apparent. Chief among those internal issues: the GPL.
But here's where the argument falls apart. A direct quote is illustrative:
Basically, legal issues, or the threat of legal issues, caused some key applications developers to back off Linux while the general negativism of Linux marketing caused many of the individuals whose innovations should have been driving Linux adoption to hang fire until MacOS X and Solaris for x86 under the CDDL came along.
So, if I get the theory right: because competitors who were losing business to Linux sued IBM on a contract claim, and Microsoft went on a gang-busters publicity drive against the viral GPL, people stopped adopting Linux.
What I don't understand is how the GPL is responsible for any of that. Microsoft was going to attack the underlying notion of open source regardless of the license, so long as it threatened their market share. SCO is suing under a contract claim with only incidental GPL claims. So those two cases can't be evidence of the GPL holding back Linux... and as for the supposed legal uncertainty, license drafters will tell you that the clauses which comprise most tech agreements have not been tested by courts, so there is no greater uncertainty with the GPL than with something you could get drafted by Preston Gates & Ellis. More importantly, application developers can write for Linux without getting anywhere near GPL'ed code. Many proprietary software products run on Linux.
The GPL certainly has its issues, and some of the FOSS luminaries have recently taken aim at the idea of mandatory share-alike clauses (the heart of the BSD v. GPL debate), but I don't see how the GPL is what is holding back Linux. The GPL single-handedly makes FOSS development possible by reducing transaction costs (putting aspiring FOSS lawyers like myself out of business) and by creating a broad assortment of code from which development can take place. Without the GPL, chaos would rule the sharing of code, requiring expensive lawyers and undermining Linux far more than any suspected legal uncertainty.
Sunday, October 02, 2005
Quote Worth Remembering
From the New York Times:
Justice Breyer, interviewed by Mr. Stephanopoulos in connection with his new book "Active Liberty: Interpreting Our Democratic Constitution," declined to say whether he thought the president should nominate a woman to replace Justice O'Connor. For him to comment, Justice Breyer said, would be like "seeing the recipe for chicken à la king from the point of view of the chicken."
From the point of view of the chicken!
Top 100 Public Intellectuals
Foreign Policy is circulating a list of the Top 100 Public Intellectuals in the world. I found out about the list through the ACS Blog, which thought the list was interesting/controversial because it contains only 10 women. I'll leave comment on that issue to those who know more about the gender divide, but it did get me thinking... what other demographics can we get out of this Top 100 list? With some quick spreadsheet work, here are some interesting statistics.
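(For the curious, the "spreadsheet work" is just a tally; a few lines of Python would do the same. The rows below are placeholders, not the real dataset:)

```python
# Tally professions and countries from (name, profession, country) rows.
# Only one placeholder row is shown; the real list has 100.
from collections import Counter

intellectuals = [
    ("Lawrence Lessig", "Law Professor", "United States"),
    # ...the other 99 rows would go here...
]

professions = Counter(p for _, p, _ in intellectuals)
countries = Counter(c for _, _, c in intellectuals)
for label, count in professions.most_common():
    print(f"{count} {label}")
for label, count in countries.most_common():
    print(f"{count} {label}")
```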
Top Professions:
- 9 Economists
- 8 Novelists
- 8 Philosophers
- 6 Historians
- 4-5 Religious Leaders (depending on how you define religion)
- 7 Political Scientists/Theorists
- 2 Politicians
Top Countries:
- 32 United States
- 13 Great Britain
- 5 China
- 4 France
Oh, and Prof. Lessig made the list :)
Saturday, October 01, 2005
My New Haircut
As promised, I have exchanged money for services in the form of a new haircut. I went to a nice place just off Roosevelt called Derby, which even maintains a website. The experience was infinitely better than previous hair-cutting experiences, and well worth the price. Without further ado, here is a photo of me with the new style.
The first thing you should note is that the photo is a touch blurry. That's because it was taken indoors without a flash; that's the price you pay. The second thing you will note is that the cut doesn't look all that different from previous cuts. True, but I think this one will grow in a lot nicer, and that the sides and back will look better after a few weeks of growth. Time will tell.
As an extra bonus, here is a picture of me in the jungle: