Microsoft Live

When you consider how large and successful Microsoft has become, I'm somewhat dazzled by how much trouble they've had getting into the Web 2.0 scene. I'm sure Microsoft doesn't enjoy playing second fiddle to Google, Yahoo, and other Johnny-come-latelys. I suspect leadership is working to address this online-presence deficit, and I imagine internal emails flying around: the future of our company is at stake! What I find really interesting is how Microsoft reacts to this, and their attempts at growing an online presence.

Side note: In 1995 Microsoft started focusing on the Internet, with Bill Gates' famous May 26, 1995 Internet Tidal Wave memo, directing executive staff to attack cyberspace and asking for "…every product plan to try and go overboard on Internet features." In this memo he also said: "Now I assign the Internet the highest level of importance."

For the past decade, Microsoft has struggled to leverage its desktop market share into the online world, with mixed results. Microsoft's large marketing and distribution arms aren't fully applicable online, and their critical mass in desktop software isn't easily transferred verbatim into Web 2.0.

One crux of the issue: Microsoft was built on a software sell/license mentality: sell software units (100 here, 1,000 there), divide into different segments (Office, Windows, etc), and let business segments compete for market share. The result: more money allocated to the more successful segments for cool.new.applications. Going on twenty years now, Microsoft has become a clear expert in developing desktop (non-online) software.

But it's a deliberate decision: is my company a services company, or a software company? A software company may generate revenue through selling software units… and after a while, you don't generate revenue from an existing product until you ship a new version (Windows 2000, XP, Vista, etc). This drives planning, marketing, distribution (Best Buy shelf space), development (C++, VB, .NET, etc), leadership, and so on. It drives your company top to bottom, from your Steve Ballmer to your Joe Coder.

A software company can profit by charging for services, but to me, that seems different from building a software-as-a-service company, such as Google. If a company wants to be service-based, then the core revenue model becomes: how much service can I sell, lease, or rent on a monthly basis? (Which is drastically different from: how many copies of this version of my program can I sell?) The longer your software company has been around, the more software-company infrastructure you have. It can be deadweight when trying to swim in the cool, new, and profitable online-services swimming pool.

I believe that before Microsoft can establish a significant online presence, culture and business-model changes may be needed, such as a Skunk Works: a subsidiary operating completely independently of Microsoft, built from the ground up as an online services company. Otherwise, mitosis may be the next best option: divide Microsoft into two separate companies, software-company Microsoft and services-company iMicrosoft. (I think the mitosis scenario is about as likely as everyone in the U.S. winning the lottery.)

Instead of Skunk Works or mitosis, Microsoft has chosen the Microsoft Live division. This division is dedicated to the online world, but ends up competing with all of Microsoft's software-centric divisions. And, from the Microsoft news I've read, there's been a fair amount of executive turnover in this division. I wonder if one promising executive after another was wrung out trying to run a services company within a software company.

On several occasions, Microsoft has ventured to buy online market share. Just recently, MSFT submitted a $44.6 billion bid to buy Yahoo. While the outcome is unknown, I could see Yahoo wanting a lot more money. And if the sale did go through, I'd hope that MSFT keeps the two companies 99% separate (i.e. Skunk Works). They could share some branding and technology transfer, but I think pushing Yahoo and Microsoft into one company could result in operation square-peg-in-round-hole.

Since buying an online presence hasn't yet yielded Microsoft a major piece of the internet pie, building is the next best thing. Case in point: Microsoft Office Live, an effort to build a firm foothold in the online world. Simply put, this is an online version of Microsoft Office.

I haven't had the time or incentive to check out Microsoft Office Live. If you have, please drop me a comment and tell me what you think. Personally, I like using the desktop version of Office. I don't need, nor want, to do that work online. In fact, I don't see clear benefits to online-izing Microsoft Word. There are plenty of sites doing truly new and innovative things in the online world. Maybe one day word processing will go predominantly online. Until then, I'd rather use my traditional desktop version of Microsoft Word. (I like the idea of AbiWord and OpenOffice, but I never made the transition.)

Since this post is a bit dry (boring), here's a Microsoft advertisement I just received for Microsoft Office Live. This ad really makes me wonder if Microsoft is desperately struggling to get people to use Office Live. Maybe I got this impression when the ad called me "chicken" for not trying it. Hey, are we in third grade? Well, I double-dog-dare you to make Vista faster and more reliable. OK, I TRIPLE-DOG-DARE YOU!

You calling me Chicken?


Vista 4GB

I recently received a comment from a reader who thought Vista didn't support 4GB of RAM. I was a bit dismayed at first, because 32-bit flavors of Windows support 4GB, and they always have (nitpickers: I mean 32-bit NT, not 9x). I had taken it for granted that people just know this. I Googled around a bit to try to better understand this misconception. After a few minutes, I realized there sure is some confusion over how much RAM 32-bit Vista actually supports. I hope I can clear up some misconceptions, shed a bit of light, and not bore you too terribly along the way.

Techno-babble disclaimer! A 32-bit number can hold a value between 0 and 4,294,967,295 (calculated as 2^32 - 1). Because that's a very large and unwieldy number, we divide it by 1024, which gives us 4,194,304 KB. Dividing again by 1024 gives us 4,096 MB. Dividing once more by 1024 gives us 4 GB. What I am trying to get at is this: a 32-bit number can count up to roughly 4 billion, which is what's needed to read/write 4 billion distinct memory cells (aka bytes).

Because 32-bit Windows addresses memory through a 32-bit memory addressing scheme, in theory it can address 4GB of RAM. I say in theory, because there are other factors that move this number up or down. Useless techno-trivia: a 64-bit memory addressing scheme can theoretically access 2^64 bytes of address space, which comes out to 18,446,744,073,709,551,616 bytes, or roughly 16 billion GB (16 exabytes).
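If you'd rather make the computer do the arithmetic, here's a quick sketch (my own toy program, nothing Windows-specific) that reproduces the numbers above:

```cpp
#include <iostream>

int main()
{
    // 2^32 - 1: the largest value a 32-bit number can hold.
    unsigned long long max32 = (1ULL << 32) - 1;    // 4,294,967,295
    unsigned long long bytes = max32 + 1;           // the full 4GB address space

    std::cout << "32-bit max: " << max32 << " bytes\n";
    std::cout << "  /1024    = " << bytes / 1024 << " KB\n";                 // 4,194,304
    std::cout << "  /1024^2  = " << bytes / (1024 * 1024) << " MB\n";        // 4,096
    std::cout << "  /1024^3  = " << bytes / (1024 * 1024 * 1024) << " GB\n"; // 4

    // A 64-bit address space, expressed in GB: 2^64 / 2^30 = 2^34 GB.
    std::cout << "64-bit space: " << (1ULL << 34) << " GB\n";  // 17,179,869,184
    return 0;
}
```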

In 32-bit Windows, addressable memory is divided into two different areas (or modes): Kernel and User. By default, Kernel-mode gets half of the maximum addressable memory (2GB), and User-mode gets the other half (2GB). Windows dedicates Kernel-mode memory to drivers and internal Windows data, and Windows applications get access to the 2GB of User-mode memory. (Nitpickers: this is simplified.)
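As a toy illustration of that split (my own simplification; boot-time options can move the boundary), here's how you'd classify an address inside a 32-bit process:

```cpp
#include <cstdio>

// Default 32-bit Windows split (simplified):
//   User mode:   0x00000000 - 0x7FFFFFFF  (2GB, per process)
//   Kernel mode: 0x80000000 - 0xFFFFFFFF  (2GB, shared by all processes)
static bool isKernelModeAddress(unsigned long address)
{
    return address >= 0x80000000UL;
}

int main()
{
    int local = 0;   // lives on this process's user-mode stack
    unsigned long addr = (unsigned long)&local;
    std::printf("0x%08lX is %s-mode territory\n",
                addr, isKernelModeAddress(addr) ? "Kernel" : "User");
    return 0;
}
```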

When it comes to physical hardware, things get muddier. It's fairly common for motherboard (aka mainboard) makers to set hardware limitations on how much RAM they actually support. Your computer's motherboard may limit accessible RAM to 3GB, 2.5GB, 2GB, etc. On server-class machines you can find motherboards that support more: 8GB, 16GB, etc. Since a 32-bit number can only address 4GB of RAM, there are various schemes enabling 32-bit Windows to access more than 4GB. Microsoft, Intel and various server motherboard manufacturers have extended memory addressing schemes.

Intel has PAE (Physical Address Extension) and Microsoft has AWE (Address Windowing Extensions). These extended memory access schemes generally work by moving a virtual window around through your physical address space. For example: I can read the memory between addresses 10GB and 14GB by setting my memory window to start at 10GB. Since we're moving a memory read/write window around, we pay a slight performance fee.
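For the curious, here's a stripped-down sketch of the AWE flavor of this windowing, using the documented Win32 calls (AllocateUserPhysicalPages, MapUserPhysicalPages, FreeUserPhysicalPages). Error handling is minimal, and the process needs the "Lock pages in memory" privilege for the allocation call to succeed:

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    // Grab 64 physical pages from Windows (requires SeLockMemoryPrivilege).
    ULONG_PTR pageCount = 64;
    ULONG_PTR pfns[64];
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns)) {
        std::printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
        return 1;
    }

    // Reserve a virtual address range to serve as the movable window.
    void* window = VirtualAlloc(NULL, pageCount * si.dwPageSize,
                                MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

    // Map the physical pages into the window, then read/write through it.
    // "Moving" the window means unmapping and mapping a different page set.
    MapUserPhysicalPages(window, pageCount, pfns);
    static_cast<char*>(window)[0] = 42;
    MapUserPhysicalPages(window, pageCount, NULL);   // unmap

    FreeUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns);
    VirtualFree(window, 0, MEM_RELEASE);
    return 0;
}
```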

So, 32-bit Windows can theoretically take advantage of 4GB of RAM, and 32-bit Vista is no different. What is different is how Vista uses the RAM and reports what's available. In Vista, if you have 4GB of RAM and a 512MB graphics card, your available RAM can show as 3.5GB or less. You are still using all 4GB of RAM, but Vista is trying to be honest by saying only 3.5GB is available to you. If you have other drivers that grab RAM, that will decrease the amount of available RAM even more. I've read complaints from people who see only 3GB available with 4GB installed; it depends on your motherboard, installed hardware, drivers, etc.

If you want more info, check out Microsoft’s: Memory Limits for Windows Releases.


Windows 7

Depending on what you’ve read or experienced: Windows Vista is really great, really bad, or somewhere in between. For the most part, I’m not too excited about Vista. I think there are a bunch of technological improvements, and lots of potential, but overall I’m just not terribly impressed with the completed product. To say the least, I think some major polish and serious optimization are needed. I’ve heard some other people feel the same way.

This brings us to Windows 7, the next version of Windows. There's a lot of Windows 7 speculation out in the wild, but not much official information, or even anything concrete. Microsoft really doesn't like to release official feature lists or product information until they're sure they can deliver. That's really no different from most software companies. What is different is that Windows is not just any software, and Microsoft is not just any software company. So Microsoft has extra incentive to keep tight-lipped until they're ready to present something. Microsoft doesn't like bad press about features they cut, and I don't blame them.

Hopefully, some Microsoft insider will drop an opinion here and there (sort of like MiniMicrosoft). Hey, perhaps one has. I've recently come across an interesting blog on Windows 7; by all outward appearances, it's written by an insider. I hope that over the coming months this blog continues to increase in relevance. I do like information, and I'm quietly cheering for a much-improved version of Windows: Shipping Seven (Random thoughts from somebody working on the next Windows OS)


Did Bill Gates Just Say Windows Sucks?

It appears that Bill Gates thinks Windows Vista could have used noticeable improvement before shipping. As I've voiced earlier, I agree with Bill. I think it's good when the people in charge are actually aware of what they are shipping. I also think it's good when executives are honest about significant issues with their products; otherwise they appear somewhat dishonest and aloof. I'm not saying executives should continually speak negatively of their products, but I do think they should be honest.

What follows is the short clip from Gizmodo’s recent billg interview. Please recognize this clip is a small excerpt of their whole interview, so Bill’s thoughts may be slightly incomplete (i.e. possibly a bit out of context).

Gizmodo's interview excerpt: Holy Crap: Did Bill Gates Just Say Windows Sucks?


Visual C++ 2008 Feature Pack Beta

Good news for developers and teams working on Microsoft Visual C++ projects! This past year, Microsoft publicly started re-investing in Visual C++. If you have upgraded to Visual Studio 2008, you can beta test some new and significant functionality coming down the pike for MFC.

This new functionality is part of the optional Visual C++ 2008 Feature Pack Beta. Here is a brief overview:

The Visual C++ 2008 Feature Pack extends the VC++ Libraries shipped with Visual Studio 2008. The VC++ 2008 MFC libraries have been extended to support creation of applications that have:

  • Office Ribbon style interface
  • Office 2007, Office 2003 and Office XP look and feel
  • Modern Visual Studio-style docking toolbars and panes
  • Fully customizable toolbars and menus
  • A rich set of advanced GUI controls
  • Advanced MDI tabs and groups
  • And much more!

This feature pack also includes an implementation of TR1. Portions of TR1 are scheduled for adoption in the upcoming C++0x standard as the first major addition to the ISO 2003 standard C++ library. Our implementation includes a number of important features such as the following (a short code sketch follows the list):

  • Smart pointers
  • Regular expression parsing
  • New containers (tuple, array, unordered set, etc)
  • Sophisticated random number generators
  • Polymorphic function wrappers
  • Type traits
  • And more!
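If you want a quick taste before grabbing the beta, here's a minimal sketch exercising a few TR1 pieces. This is my own example, not Microsoft's; in this release the TR1 types live in namespace std::tr1 inside the regular headers:

```cpp
#include <iostream>
#include <string>
#include <memory>       // tr1::shared_ptr
#include <regex>        // tr1::regex
#include <tuple>        // tr1::tuple
#include <functional>   // tr1::function

int add(int a, int b) { return a + b; }

int main()
{
    // Reference-counted smart pointer: the string frees itself.
    std::tr1::shared_ptr<std::string> name(new std::string("feature pack"));

    // Regular expression parsing.
    std::tr1::regex two_words("(\\w+) (\\w+)");
    std::cout << std::boolalpha
              << std::tr1::regex_match(*name, two_words) << '\n';  // true

    // Fixed-size heterogeneous container.
    std::tr1::tuple<int, double> pair(42, 3.14);
    std::cout << std::tr1::get<0>(pair) << '\n';                   // 42

    // Polymorphic function wrapper.
    std::tr1::function<int (int, int)> f = add;
    std::cout << f(2, 3) << '\n';                                  // 5
    return 0;
}
```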

Here is a brief tour of new MFC GUI additions.

If you are using VS 2008, here is where you can grab the Visual C++ 2008 Feature Pack Beta.


The Death of High Fidelity: Rolling Stone

Rolling Stone has an interesting article on a transformation within the recording industry: louder music and the resulting loss of fidelity.

In a nutshell, here is what's going on. As personal computers and portable audio players become more popular, more people listen to music on tiny (low-fidelity) speakers. More people are also listening to music in MP3 and other digital file formats, which compress music and remove further detail. Because people more often listen to music at lower fidelity, record labels are compensating by making music louder… so it catches your ear. This has resulted in an industry phenomenon: louder music begets louder music. It's a loudness war of sorts, where some record labels are fearful of releasing music that's not loud enough, or not competitive enough. Gotta be competitive, right?

But this loudness war causes problems. When music is made louder, it is actually being compressed, and you lose significant detail. Instead of a song having loud and quiet parts, it's all loud. What follows are two waveform diagrams. The first shows a song that's relatively quiet (what I consider uncompressed).

This is U2: “With or Without You” (Original)

Notice this song’s waveform has peaks and valleys: loud and quiet parts. The song starts quietly (left-most) and gradually builds in loudness (moving to right). Given a reasonably high-quality sound system, and a reasonably quiet listening environment, your ear will perceive the subtle details in this song. In essence, this song has clear detail and fidelity.

Now, let's look at Arctic Monkeys: "I Bet You Look Good on the Dancefloor"

Looking at the waveform, this song is pretty dang loud the whole time. This continual loudness is achieved by compressing the peaks and valleys (i.e. details) until it's pretty much a constant peak. This song's sound is all about constant loudness, and it forsakes subtle detail and fidelity. It's mastered to be an assault on your ears. Which is fine, if that's what you are looking for.
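If you'd rather see the effect in code than in pictures, here's a toy sketch using a synthetic sine "song" (my own example, not these actual tracks). It measures the crest factor, the ratio of peak level to average level, before and after loudness-war-style clipping:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Crest factor = peak / RMS. A high ratio means real dynamics;
// a ratio near 1.0 means the waveform is a brick wall.
static double crestFactor(const std::vector<double>& s)
{
    double peak = 0.0, sumSquares = 0.0;
    for (size_t i = 0; i < s.size(); ++i) {
        peak = std::max(peak, std::fabs(s[i]));
        sumSquares += s[i] * s[i];
    }
    return peak / std::sqrt(sumSquares / s.size());
}

int main()
{
    // One second of a 440Hz sine that slowly builds from silence to full
    // volume: a stand-in for a song with quiet and loud parts.
    std::vector<double> song;
    for (int i = 0; i < 44100; ++i) {
        double build = i / 44100.0;
        song.push_back(build * std::sin(2.0 * 3.14159265 * 440.0 * i / 44100.0));
    }
    std::printf("original crest factor: %.2f\n", crestFactor(song));

    // "Master it louder": boost 4x, then hard-clip at full scale.
    std::vector<double> loud(song);
    for (size_t i = 0; i < loud.size(); ++i)
        loud[i] = std::max(-1.0, std::min(1.0, loud[i] * 4.0));
    std::printf("clipped crest factor:  %.2f\n", crestFactor(loud));
    return 0;
}
```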

But, to my ears (and eyes), louder is not better. I much prefer music to have more detail (fidelity), and I think loudness should not be overdone. At least, that's my take. If you are interested in how your music sounds, check out: The Death of High Fidelity


Microsoft Emacs.Net

A recent post at meta-doublasp hints at something Microsoft may have in the works: Emacs.Net. For many UNIX and Linux folks, emacs is probably old hat. Otherwise, here’s a quick overview.

Emacs is a popular UNIX text editor, often used to write software, configure systems, and edit just about anything. Emacs is mainly used by technical folks, and is pretty easy to configure, extend and tweak if you are so inclined. I haven't used emacs much in the past decade, but it is a handy, light-weight editor that also scores high on the geek-cred scale. As far as UNIX file editors go, emacs dates back to 1976 and has been developed in a fashion that retained its usefulness as it was extended. In contrast to some other tools, emacs has endured the test of time and appears to keep on going. For more on emacs, please check out the wiki.

Which brings us back to Microsoft's premier development editor: Visual Studio (VS). I've been using VS for the past decade and overall, each version has gotten bigger, and at times more unwieldy, buggier and less usable. That is a shame, because Microsoft has been cramming tons of work into adding features. On paper, I think VS ships a great spec: all kinds of features, functions and capabilities. But over time this editor has grown, and for some users its usability and productivity have become questionable.

One long-running joke: Visual Studio was designed by and for executives. The meaning may be this: at MS, executives drive product direction; at other companies, executives specify technology stacks, including development toolsets such as Visual Studio. What I'd find sad is if MS executives don't really know what they're shipping (Visual Studio), and the other companies' executives don't really know what they're buying.

A related item: Microsoft .NET. Years ago, when the internet was starting to take off and executives were asking "how can I cash in on this internet craze?", Microsoft was starting to sweat over how they were going to compete with Netscape and remain competitive in this new internet-based world. The MS Windows team was worried about future prospects (i.e. who needs Windows if everything is done in-browser?), as was the MS Office team (i.e. how do we compete with web apps?), and the MS Developer division (i.e. who still needs native applications?).

This brings us to .NET, which Microsoft hoped would provide competition to Java (another interpreted and web-centric language). .NET could theoretically help the Office team if they wanted to market web-based office applications. And .NET could help the developer division remain competitive by developing a Cool.New.Thing! A bunch of languages (J#, VB.NET, etc) using the same runtime (the CLR), which runs on different platforms (Cool! The CLR is its own abstraction mechanism and runtime). Basically, a bunch of different people were jazzed about what .NET could do.

In the spirit of dogfooding, the Microsoft Developer division created a massive all-in-one developer tool to end all developer tools. I suspect many people thought this was a grand idea. Now that it's done, I do have to say this tool is a 100-ton whale that cannot get out of its own way. (That's just my opinion, coming from VS6.) .NET has possibly become something of a challenge for Microsoft. Overall, it's very functional and provides all kinds of cool capabilities. But…

.NET brings some challenges. Automatic garbage collection (like Java's) is really pretty good, but perhaps not as good as custom allocation and deallocation written in C++ (granted, native alloc/free leads to a whole host of problems). Also, .NET imposes a performance hit, since the CLR is a complete runtime running on top of the Windows runtime. But my biggest issue with .NET: it's not being used as much as good old-fashioned C++. And that's a travesty. Specifically, MS spent many millions of dollars designing, developing and marketing a product (.NET) that many people don't want, and MS has spent years pushing C++ developers, teams and products away from C++. Don't get me wrong, there are a bunch of people writing applications for .NET, but when you look at industry statistics, there are currently more people, teams and companies writing applications in good old-fashioned C++ than in .NET. Since it took Microsoft some years to realize this, they are now quickly trying to re-invest in their native C++ toolset. So now Microsoft is saying something like: Hey, C++ is a great language! You can use .NET or C++. Really, we love both!
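To make the garbage-collection trade-off concrete, here's a tiny sketch (my own illustration) of the classic native-code failure mode and the idiomatic C++ fix; no CLR, no collector, just deterministic destructors (std::auto_ptr is the era-appropriate tool, and TR1's shared_ptr works too):

```cpp
#include <memory>
#include <string>

// The "whole host of problems" with manual alloc/free: any early
// return (or exception) between new and delete leaks the object.
void leaky(bool bail)
{
    std::string* s = new std::string("manual lifetime");
    if (bail)
        return;          // oops: s is never deleted
    delete s;
}

// RAII: ownership lives in an object whose destructor runs on every
// exit path, giving GC-like safety with deterministic timing.
void safe(bool bail)
{
    std::auto_ptr<std::string> s(new std::string("scoped lifetime"));
    if (bail)
        return;          // fine: auto_ptr's destructor deletes the string
}

int main()
{
    leaky(true);   // leaks one std::string
    safe(true);    // leaks nothing
    return 0;
}
```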

This brings us back to Microsoft's rumored Emacs.Net. Since .NET has some issues, Visual Studio is written in .NET, and many C++ developers have gotten fed up with .NET… it's time for a new tool: Emacs.Net. You may say: huh? It does seem a bit odd, but if I had to guess, I'd think this tool will interop with .NET but will overall be a (hopefully) clean-slate design, with the express purpose of continuing to leverage Microsoft's technologies while reconnecting with the core developer base (i.e. emacs endures, so let's make an emacs). The result: a new light-weight tool to be leveraged against Windows Server Core and PowerShell, and by general technical folks.

These are just my best guesses, because I really don't know. And neither will Microsoft, until all is said and done. Developing software is one massive, yet continual, feedback loop. You don't really know where you're going, or how it will pan out, until the product ships.


Aptera Update + Video

The folks over at Popular Mechanics were able to finagle an exclusive test drive and interview of the upcoming Aptera Typ-1. For the uninitiated, the Aptera Typ-1 is an all-electric plug-in vehicle coming to market in 2009 for under $30,000. Sometime thereafter, Aptera plans to market the Typ-2, a 300 MPG gasoline hybrid that shares the Typ-1's body and features.

The more I keep up with this car, the more I believe it is the first real re-thinking of the automobile since its inception. I know this sounds a little over the top, but the Typ-1's design is what I consider truly clean and innovative. In fact, I think if this car achieves satisfactory adoption, the rest of the auto industry could very well be shaken up. And that, in my opinion, is a good thing, because auto makers seem geared towards larger oil-belching monsters.

Looking at the 1907 Ford Model T and today’s 2007 Ford Explorer, I see similarity. Now, when I compare the Aptera Typ-1 and the 2007 Ford Explorer, I see much less similarity. And not just at first glance. Overall, the Aptera really is superbly different and better.

One comparison: the Ford Model T got 13 to 21 MPG. The Ford Explorer gets 13 to 20 MPG. Hmm… maybe not so different.

Which brings us back to the Aptera, a rethinking of what a car should be. The frame, for example, borrows heavily from the design of boats and planes, where space and weight are at a premium. The Typ-1's suspension has been lifted from the pinnacle of automobile design: Formula 1. And the typically boring act of ingress and egress has been improved with swing-up doors (as seen on supercars). The driveability is very good, with stable yet responsive handling and strong acceleration. Visibility is also outstanding. The climate control system is a complete rethinking: it uses one heat pump for both cooling and heating.

Since Aptera was able to step back and design a truly revolutionary car, they went all out. Roof-based solar panels provide power and continual cooling, so the car's interior always remains cool even when the car is powered off. The interior electronics are as impressive as they are functional. Standard-fare 1900s-era dashboard gauges have been replaced with video screens showing vital stats and a 180-degree rearward view. Traditional center-console old-tymey knobs and levers have been replaced with a modern touch-screen monitor.

Overall, this really is a massively clean-slate design, where weight and aerodynamics have been drastically improved. The body and frame assembly appears to be very safe and solid, and weighs in at a very light 1,480 pounds (a Honda Civic weighs roughly twice as much). So the Typ-1's 1,480 pounds is very light by today's standards, but its highly advanced composite frame should provide more than adequate crash protection. And the aerodynamic coefficient of drag is a mind-boggling 0.11.

The only possible semi-gripe I have is the limited seating: two adults plus one child. In truth, this is really not all that bad, and should provide adequate transportation capacity for many people. If/when the Typ-1 and Typ-2 really start selling, Aptera has a five-seater in the works. And I certainly hope that both the Typ-1 and Typ-2 take off. Once they do, Aptera's five-seater could push an auto-industry shake-up, forcing major auto manufacturers to deliver revolutionary transportation for the future of mankind and our planet.

But it all comes back to people. I think people voting with their pocketbooks is what's required to pave the way for even more radical designs, with further decreases in weight and improvements in efficiency.

Here is the Popular Mechanics exclusive video, which includes their test drive, interview and analysis.

For my previous Aptera thoughts, please see: Aptera’s Sub-$30K, 300 Mpg Car and The Future of Cars.


Firefox 3 Beta 2 Released

If you're a developer or a person who grabs the latest betas… then heads up: Firefox 3 Beta 2 has just been released. This release is slated to be more secure, easier to use, more personal (improved UI), and a better platform for developers; it also brings increased performance… and fixes over 330 memory leaks.

I am a big fan of Firefox. I also believe it's in everyone's best interest for IE competitors (such as Firefox and Opera) to keep pushing Microsoft. Besides, I like Firefox much more than IE. 'Nuf said.

New changes in this milestone are:

  • Improved security features such as: protection from cross-site JSON data leaks, tighter restrictions on site-specific content using effective TLD service, better presentation of website identity and security, malware protection, stricter SSL error pages, anti-virus integration in the download manager, version checking for insecure plugins.
  • Improved ease of use through: better password management, easier add-on installation, new download manager with resumable downloading, full page zoom, animated tab strip, and better integration with Windows Vista, Mac OS X and Linux.
  • Richer personalization through: one-click bookmarking, smart bookmark folders, location bar that matches against your history and bookmarks for URLs and page titles, ability to register web applications as protocol handlers, and better customization of download actions for file types.
  • Improved platform features such as: new graphics and font rendering architecture, JavaScript 1.8, major changes to the HTML rendering engine to provide better CSS, float-, and table layout support, native web page form controls, colour profile management, and offline application support.
  • Performance improvements such as: better data reliability for user profiles, architectural improvements to speed up page rendering, over 330 memory leak fixes, a new XPCOM cycle collector to reduce entire classes of leaks, and reductions in the memory footprint.

For official details, check out the What's New section of the Release Notes.

Here is the Firefox 3 Beta 2 download link. (If you don't want to beta test, I highly recommend you stick with Firefox 2.)

For a technical perspective on Firefox memory leaks (and memory usage), be sure to check out Pavlov, a Firefox developer.


150 MPG Toyota Prius

Since I've been writing a rash of car-related posts (Aptera's 300 Mpg Car, Future Of Cars), this entry seems like a good continuation.

In the first quarter of 2008, A123 Systems and Hymotion plan to market a $9,500 add-on for the Toyota Prius. This cost, which includes installation, is expected to increase your Prius' fuel economy to 150 MPG in the city. Pretty dang impressive, if it works reliably and safely! I do have some minor qualms about installing a massive lithium-ion battery in my car. I'm not saying the "massive lithium battery in the trunk" idea is completely bad… I just don't want to beta test, or even production test, it.

Also, please be aware that this fuel economy increase incurs the additional cost of electricity to recharge the battery. That's right: when you park your Prius, you will need to plug it into an electrical extension cord to recharge this additional battery.
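Back-of-the-envelope time. Everything here except the $9,500 price is my own assumption (2008-ish gas price, stock Prius city mileage), and it ignores the electricity cost mentioned above, so treat the result as ballpark only:

```cpp
#include <cstdio>

int main()
{
    const double kitCost  = 9500.0;   // Hymotion add-on, installed
    const double gasPrice = 3.00;     // USD per gallon (assumption)
    const double stockMpg = 45.0;     // stock Prius city MPG (assumption)
    const double kitMpg   = 150.0;    // claimed city MPG with the kit

    // Gas cost per mile before and after the kit.
    double costStock = gasPrice / stockMpg;   // ~6.7 cents/mile
    double costKit   = gasPrice / kitMpg;     // ~2.0 cents/mile

    // Miles of driving before gas savings repay the kit's price.
    double breakEven = kitCost / (costStock - costKit);
    std::printf("break-even: about %.0f miles\n", breakEven);  // roughly 204,000
    return 0;
}
```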

For more information, check out this video of the Hymotion-equipped Prius.
