Op-Ed: My Take on Apple WWDC 2019

For years, I observed Apple’s Worldwide Developers Conference from the sidelines purely as an enthusiast. Since moving to Hootsuite, I can now definitively state that I have a proverbial horse in this race – we publish multiple applications on the iOS App Store. Hence, this year I paid even more attention than usual and thought I would share some of my findings. 

iOS

This year debuted iOS 13, which is very much targeted at end-user enhancements. Most notable is a native dark mode, something end users have wanted for years. The built-in applications are all getting a plethora of feature enhancements, with the majority of the investment going into Music, Messages (including support for dual-SIM), Maps (including Look Around, a Street View-style feature), Reminders, and Photos. The keyboard now supports swiping, bringing it in line with third-party options and Android. ARKit 3 debuted with people occlusion and motion capture, along with the introduction of RealityKit, which makes it easy to create photorealistic renderings of digital assets, animate them, and have them obey the laws of physics. CarPlay is upgraded with a new layout that appears to be eminently more useful. 
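
RealityKit’s pitch is that this kind of scene takes very little code. As a rough, hedged illustration (the function name, sizes, and colors are my own placeholder choices, not Apple sample code), a Swift sketch that enables ARKit 3’s people occlusion and drops a physics-enabled box onto a detected horizontal surface might look like this:

    import ARKit
    import RealityKit
    import UIKit

    // Illustrative only: drop a physics-enabled box onto a detected horizontal
    // surface and turn on ARKit 3 people occlusion. "arView" is assumed to be
    // an ARView already installed in a view controller.
    func configureARDemo(in arView: ARView) {
        // People occlusion lets virtual content render behind real people.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            config.frameSemantics.insert(.personSegmentationWithDepth)
        }
        arView.session.run(config)

        // A simple box that participates in RealityKit's physics simulation.
        let box = ModelEntity(mesh: .generateBox(size: 0.1),
                              materials: [SimpleMaterial(color: .gray, isMetallic: true)])
        box.generateCollisionShapes(recursive: true)
        box.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                               material: nil,
                                               mode: .dynamic)

        // Anchor the box to the first horizontal plane ARKit finds. (A real scene
        // would also give that plane a static collision body so the box has
        // something to land on.)
        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(box)
        arView.scene.addAnchor(anchor)
    }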

Verdict: Dark Mode, CarPlay, and end-user application improvements will continue to make the iPhone the most seamless mobile experience in the industry. iOS 13 makes recent-generation devices the most capable augmented reality handsets on the market and is no doubt setting the stage for something much bigger in the future. 

 

tvOS

This year saw a more modest update to tvOS. Many of the new capabilities are centered on the Apple TV enhancements already announced in the spring. Of note are multi-user support, which enables personalized recommendations, and support for third-party game controllers. Personalized recommendations are by far the most desired feature for both my wife and me, and will no doubt keep the Apple TV as our primary means of watching television. 

Verdict: A hugely convenient evolutionary release that will no doubt make the Apple TV the most popular gaming platform outside of Xbox and PlayStation. 

 

iPadOS

One of this year’s biggest announcements was the separation of iPadOS from iOS. First up, Safari now reports itself as a Mac to provide a full “desktop-class” browsing experience. The days of getting a mobile-optimized site on an iPad are no more. iPads now support external USB storage and the ability to import photos directly from digital cameras. The Files application has been upgraded to mirror the experience in macOS. Sadly, there does not yet appear to be any feature to fully sync cloud libraries for offline use, but hopefully this will come next year. Multitasking and split-screen mode have been greatly improved, including the ability to have multiple windows of the same application open (e.g. multiple copies of Microsoft Word to edit different documents). There are new editing gestures to expedite things like copy, paste, and undo. A shrunken, floating keyboard is available for one-handed typing. Pencil latency has been markedly reduced on the latest-generation hardware. Mouse support is finally provided, though it is tucked away as an accessibility feature. And lastly there is Sidecar, which allows an iPad to be seamlessly used (via cable, WiFi, or Bluetooth) as a second display for macOS Catalina. Stylus-enabled applications can now receive input directly through the Apple Pencil, and Sidecar also provides a Touch Bar mirroring what is on the 2016+ generation MacBook Pros. 
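
On the developer side, the multi-window behavior above is exposed through the new UIScene APIs in iOS 13. Here is a minimal, hypothetical sketch of asking iPadOS for a second window; the activity type and user-info key are placeholders, and the app would also need to declare multiple-scene support in its Info.plist:

    import UIKit

    // Illustrative only: ask iPadOS to open a document in a second window (scene).
    // Assumes the app's Info.plist declares UIApplicationSceneManifest with
    // UIApplicationSupportsMultipleScenes = YES.
    func openDocumentInNewWindow(documentURL: URL) {
        let activity = NSUserActivity(activityType: "com.example.openDocument")  // placeholder type
        activity.userInfo = ["documentURL": documentURL.absoluteString]

        // iOS 13: request a new scene session; the scene delegate decides what to
        // show based on the user activity it receives.
        UIApplication.shared.requestSceneSessionActivation(nil,
                                                           userActivity: activity,
                                                           options: nil,
                                                           errorHandler: { error in
            print("Could not open a new window: \(error)")
        })
    }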

Verdict: iPadOS is some of the most impactful news coming out of WWDC. The improvements greatly close the gap between an iPad and a laptop, all without compromising the touch-first experience – something Apple has been extremely principled about, dating back to its rare indirect criticism of Windows 8. It is easy to see how these capabilities will evolve more rapidly in the future, making an iPad the only computer one needs for more and more use cases. For someone like me, it will shift even more time to my iPad from my MacBook Pro, especially when traveling. More interesting is Sidecar – I believe this is a hat tip to the future of touch in macOS, which has been remarkably touch-free for many years. I envisage a world where the iPad is used to enable touch (or stylus) input where it makes sense, without compromising what is a desktop-first operating system. The blend of touch and desktop is a holy grail no one has yet achieved, but this approach seems well on its way to getting there in subsequent iterations. Project Catalyst makes this even more likely. 

 

watchOS

The sixth iteration of watchOS brings independence from the iPhone. The Watch will now have its own App Store, and apps can run completely independently, opening up a variety of new phone-free use cases. Several popular iPhone apps are finding their way to the Watch, such as Calculator and Voice Memos. Health and fitness continues to steal the show, with several key enhancements including long-term activity trends, a decibel meter that warns of potentially harmful noise levels, and a menstrual cycle tracker. 

Verdict: The most popular health wearable will continue to maintain and grow its lead within the market. The ability to run independently of the iPhone, plus additional productivity apps, will open up new use cases and may spawn broad adoption in categories beyond health and fitness. For me, it will make my Watch more useful than ever. 

 

macOS

Recent macOS releases have all had California names – this year it’s Catalina. And Catalina brings some of the heaviest investment in macOS in some time. One of the most long-requested features has materialized – the breakup of iTunes. Replacing it are new Music, Podcasts, and TV apps. Devices can now be managed directly in the Finder. Voice Control is now offered as a new means of navigation. Apps such as Home and News will also receive significant improvements. 

Verdict: Outside of Project Catalyst (more on that below) and Sidecar, this is a largely evolutionary release. It delivers on one of the most long-requested features by breaking up the user-experience behemoth known as iTunes, making it a worthwhile upgrade for that alone. However, Catalyst and Sidecar hint at much broader ambitions to come…

 

Siri

Apple’s often-criticized voice assistant gets a number of upgrades this year. First is a more natural-sounding voice. Support has also been extended across other devices: AirPods, for example, can now read incoming messages aloud and let you dictate an immediate reply, while the HomePod can differentiate voices to allow for individualized responses. More third-party apps can now leverage Siri in CarPlay, facilitating a much richer voice-driven in-car experience. 

Verdict: Although not as capable as Google Assistant or Alexa, Siri provides a solid “third place” without compromising privacy. The additional use cases take something that has been growing increasingly useful and make it even more so. 

 

HomeKit

HomeKit this year added support for several new categories of devices, including routers and security cameras. Security camera footage can be analyzed locally by a home hub (a HomePod, Apple TV, or iPad), then encrypted and stored in iCloud, with 10 days of footage kept without counting against your iCloud storage allowance. 

Verdict: HomeKit is one of the easiest smart home systems available and one that certainly makes privacy a feature. Although I personally have not gone down the smart home path, if I were to do it, HomeKit would likely be my choice given privacy concerns with other platforms. 

 

Project Catalyst

One of the largest announcements was no doubt Project Catalyst, the project formerly known as Marzipan. Developers can now take iPad applications and target macOS through Xcode, getting apps working with a minimal amount of refactoring, provided no obsolete frameworks are used and there are not too many hardware-specific dependencies. iPad apps were chosen deliberately because they are inherently designed to handle a larger display, with the hypothesis that the experience will translate better to macOS. This is also Apple being cognizant that an application built for a phone may not work very well on a desktop, something that is painfully apparent when using Android apps on ChromeOS. Several major developers have already committed to leveraging Project Catalyst, including Twitter and Atlassian. 
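
For the places where the Mac build genuinely needs to diverge, Xcode 11 adds a compile-time conditional. A small, hypothetical sketch of the kind of light refactoring involved (the view controller and its behavior are illustrative, not from any real app):

    import UIKit

    // Illustrative only: an existing iPad view controller being brought to the Mac
    // via Project Catalyst. Most code compiles unchanged; platform differences are
    // isolated behind the new macCatalyst compile-time condition.
    final class SettingsViewController: UIViewController {

        override func viewDidLoad() {
            super.viewDidLoad()

            #if targetEnvironment(macCatalyst)
            // On the Mac, hide the touch-oriented navigation bar and let window
            // toolbars and menus take over.
            navigationController?.setNavigationBarHidden(true, animated: false)
            #else
            // On iPhone and iPad, keep the touch-first navigation bar.
            navigationController?.setNavigationBarHidden(false, animated: false)
            #endif
        }
    }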

Verdict: Probably one of the most game-changing announcements from WWDC – this sets the stage to invigorate the macOS ecosystem like nothing before. Mac users can no doubt look forward to many more native applications than have previously been available. Early developer feedback is very positive, suggesting that Apple’s timing is once again right to truly drive a cross-platform developer experience. It also paves the way for further blurring of the line between the iPad and the Mac. It’s easy to imagine where this goes as the iPad gains mouse support and true windowing while the Mac becomes more touch-enabled when using an iPad as a second display and input device. 

 

Sign in with Apple

Single sign-on has been around the web for a while, with Microsoft, Google, Facebook, and Twitter all being quite prevalent. Now Apple has entered the fray – but with a unique twist: privacy. Whereas the others leverage SSO to build up advertising profiles, Apple is doing the exact opposite. No tracking data is collected. It even goes so far as to offer email address obfuscation, relaying mail through a randomly generated address, to truly maintain a private identity. And Sign in with Apple will be mandatory for App Store apps that offer third-party sign-in, virtually guaranteeing widespread adoption. 
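
The developer-facing integration is deliberately small. A minimal Swift sketch of kicking off the AuthenticationServices flow (delegate and presentation-context details omitted; treat this as an illustration, not production code):

    import AuthenticationServices

    // Illustrative only: kick off a Sign in with Apple request (iOS 13+). The
    // caller must conform to the two delegate protocols; handling the returned
    // credential is omitted here.
    func startSignInWithApple(
        delegate: ASAuthorizationControllerDelegate & ASAuthorizationControllerPresentationContextProviding
    ) {
        let request = ASAuthorizationAppleIDProvider().createRequest()
        // Ask only for what is needed; if the user chooses to hide their email,
        // Apple supplies a relay address instead of the real one.
        request.requestedScopes = [.fullName, .email]

        let controller = ASAuthorizationController(authorizationRequests: [request])
        controller.delegate = delegate
        controller.presentationContextProvider = delegate
        controller.performRequests()
    }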

Verdict: This is the 2019 “Apple tax” for developers, but consumers win in the end – at the expense of the other major SSO providers. It will be interesting to see how end-user adoption goes, but as a privacy-minded individual, I can see myself using it. 

 

The Mac Pro & Pro Display XDR

Several years back, it seemed that Apple had abandoned the professional user: the 2013 Mac Pro didn’t solve very many problems for anyone, and the company exited the display market while pushing users toward MacBook Pros and USB-C/Thunderbolt 3. The rest of the line somewhat languished, beyond the outstanding 5K iMac. Professional users were rightfully outraged. 

In a rare about-face, Apple admitted it had gone down the wrong path and re-committed to the professional audience, but with a two-year tease for a new Mac Pro. In the interim, the core iMac and notebook lines have been refreshed with much better regularity and seen massive specification increases – hindered only by the awful butterfly keyboards. The Mac Mini received an impressive specification boost that nobody thought would happen. The iMac Pro debuted and delivered the most powerful Mac to date – something far more powerful than the 2013 Mac Pro. All of this led to speculation that the new Mac Pro and display would be horribly off the mark when they did arrive; the rumor mill was convinced it would be a stackable system of Mac Mini-like enclosures with proprietary connectors, while users pined for the old G5-style “cheese grater” chassis. 

In a somewhat shocking move, Apple reimagined the “cheese grater” and delivered the most powerful and expandable Mac ever seen. Industry standards like PCIe are embraced, but Apple made them better with the MPX modules and is encouraging an open ecosystem. At debut, the new system has an impressive array of GPU options plus an FPGA coprocessor (Afterburner) for editing up to 3 streams of 8K or 12 streams of 4K video. This is paired with a 32” display that can operate in either portrait or landscape at a Retina 6K resolution with an astounding 1,000,000:1 contrast ratio. The whole setup is ungodly expensive, but ungodly powerful. 

Verdict: Nobody can say that Apple doesn’t care about professionals any longer. There is truly something for everyone, from extremely powerful notebooks, iMacs, and Mac Minis (with a third-party display and eGPU ecosystem) to the iMac Pro – and now this beast of a setup. It punches far above the weight class of any previous Mac Pro, whose use cases can now arguably be met by the iMac Pro or the other aforementioned options. The level of power (and price tag) one can configure is pretty amazing. Likewise, the new display’s specifications compete with reference displays costing tens of thousands of dollars. When looking at what is being delivered, it is priced anywhere from competitively to inexpensively, depending on the use case. 

As someone who does a lot of virtualization as well as photography (with a budding interest in SLR panoramic photography, which requires a lot of horsepower), this is the Mac Pro I’ve been waiting for. Although an iMac Pro would definitely meet my needs, the modular expandability wins every time. My last Mac Pro lasted from 2006 to 2016; I can’t say I’ve managed to get a decade out of any other computer I’ve owned. I expect no less here…

 

Closing Thoughts

This was an incredibly exciting WWDC. My key take-aways:

  • The most refined mobile operating system gets even more refined, with a lot of core functional enhancements and a much-desired dark mode, combined with the best in-car experience available
  • Although by no means winning the AI assistant race in terms of features, Siri has evolved to the point of genuine utility while not sacrificing privacy
  • HomeKit has expanded to pretty much every mainstream use case for a smart home platform, while again prioritizing privacy
  • The Watch and watchOS have emerged as a first-class platform, with a plethora of new use cases opened up by iPhone independence. Only time will tell if the Watch will continue to be a health-and-fitness-first device (and the undisputed leader there) or evolve into something broader. 
  • Apple is truly becoming the “privacy as a feature” company; it is a core value proposition of virtually every product line, including the new Sign in with Apple, which provides a unique means of obfuscating one’s identity over the Internet and will no doubt become a very widely adopted SSO mechanism given the mandate to offer it in the App Store. 
  • Apple is more committed to the Mac ecosystem than ever, with the introduction of Catalina, the new Mac Pro, the Pro Display XDR, and Project Catalyst, which looks to be on the verge of bringing a multitude of iOS applications to the Mac in an extremely seamless and expeditious fashion. Big-name adoption and early positive feedback are indicators that taking the time to mature Project Catalyst was probably the right thing for all. 
  • The iPad is now becoming a true laptop replacement for many more use cases. Its touch-first user experience is not being compromised in any way, and it is clear Apple has even further ambitions for the future. Even more interesting is its fusion as a touch-display accessory for macOS, allowing a best-of-both-worlds setup without compromise. Project Catalyst and the harmonizing of the developer platform between iOS and macOS hint at even greater things to come as iPadOS and macOS converge. 

The fact that iOS was not the star of the show acknowledges what sales and Wall Street have already recognized: we are likely at peak iPhone. The renewed focus on broadening and growing the overall Apple ecosystem quashed any lingering doubts I had about Apple’s overall strategy.

 

Some Other Thoughts

First, a hat tip to my good friend and former colleague Dylan Lloyd, with whom I always enjoy discussing Apple’s latest, including this WWDC. 

This is also my first blog post in almost three years. Shame on me. Now that I work at Hootsuite, a social-tech company, I must endeavour to “be more social” and blog more often. Thanks for reading! 

 

 

My Sojourn with Android

I am a global traveler, spending a significant amount of my time in our Ottawa, ON; Copenhagen, DK; and Brussels, BE offices combined with a fair bit of time in the United Kingdom. Being connected is part of my job, which means having a local mobile plan is a must. I have 3 SIM cards (US, Canada, and Denmark with EU-wide coverage). But 3 phones is at least one too many to carry.

I broke down and bought an unlocked dual-SIM Samsung Galaxy S7 Edge from Amazon. This is the closest Android equivalent to my workhorse iPhone 6S Plus (the Nexus 6P is what I really would have liked, had a dual-SIM variant existed).

And, after two weeks, I thought it’d be interesting to journal my impressions of Android. On the hardware front, I’d rate the Samsung more favorably than the iPhone on a few fronts:

  • Curved display
  • Waterproofing
  • OLED screen

But it is a step backward in terms of camera and storage (you lose microSD support when using dual SIMs, and I’m not brave enough to try the hack). Battery life is about the same.

On the software side, the two couldn’t be more opposite:

  • Android has a setting for everything; figuring out what is what can be quite confusing and Search is your best friend. But, you can personalize the device until you’d hardly recognize it from the default.
  • All of the app-store applications are there – but some are clearly not as polished or stable as their iOS brethren.
  • Android from Samsung ships with both the Google apps and the Samsung apps, plus a Samsung-based lock screen and launcher. In theory, you can replace all of these with apps of your choice…
    • In practice, I have not been able to replace the Lock Screen without running into challenges.
    • I have replaced the Launcher and all default applications with Arrow and the applications from Microsoft.
  • Chrome vs. Safari is a matter of personal preference; I prefer Chrome but still end up using Safari given Apple’s reluctance to truly let the browser get replaced on iOS.
  • iTunes is definitely a missing link (though with limited storage I could not sync that much anyways).
  • Dual SIM support is pretty elegant, but you can only have 1 primary SIM working at LTE speeds and you have to manually set which one is primary (which was slightly confusing to figure out).
    • The inability to have separate data roaming settings for each SIM is definitely a gap I hope gets filled in Android Nougat.
  • Software updates to Android seem more frequent than iOS; however OS upgrade-ability is a huge problem. Who knows when this S7 Edge will see Android Nougat?

My overall verdict is that there is plenty of room for both Android and iOS, but that other operating systems are probably a lost cause at this point. How to size them up:

  • If price matters, Android is the way to go, as you can get a very affordable device with a complete app store repertoire.
  • If you want to customize everything, Android is the way to go. You just can’t customize to this degree with iOS.
  • If you want an absolutely seamless, intuitive experience where everything “just works” – then the choice is simple – get an iPhone.
  • For those heavily invested in the Apple ecosystem, the choice is also very simple: iPhone.

This post wouldn’t be complete without a mention of Windows 10 Mobile. The Lumia 950 XL was the phone I really wanted to buy. Microsoft does dual SIM better than anyone, and it has the same polished end-to-end experience as iOS. However, the uninspired industrial design of the 950 XL (which is outshone by the lesser 650), coupled with longstanding reports of bugs and carrier feature incompatibility (no WiFi calling) and an absolute lack of applications, made it a no-go.

Would Android make me switch from iOS? No…but it is interesting and fun to have it and I will definitely enjoy it while maintaining connectivity in Canada and the European Union (and I guess now Great Britain).

This is also a perfect product management case study of customer segmentation and the room-for-both phenomenon (and conversely the lack of opportunities for Windows Mobile, BlackBerry, or others).

Windows 10 and the Surface: Winning Me Back

To celebrate the 30th birthday of Windows, I figured it was time to share my own thoughts on the operating system.

With the Threshold 2 build of Windows 10 (build 10586) and the third generation of Surface devices (Surface Pro 3, Surface 3 LTE), Microsoft is winning me back. For those who know me well, that may be a surprising statement. But it’s true, and I have to give credit where due.

By mid-2006, I had switched all of my personal computing to Intel-based Macs instead of upgrading to Windows Vista. From 2008 onward, I used a Mac almost exclusively as my daily machine and used Windows in a VM or BootCamp only where necessary. Although Windows 7 was good, the hardware available simply wasn’t that great. At the same time I traded my trusty Treo running the original Windows Mobile for an iPhone 3G and never looked back.

Windows 8 and its attempt at redemption – 8.1 – were a disaster. Windows RT was a disaster. The first generation of Surfaces was great to look at and touch – but not that great to use in real life.

Then, Microsoft woke up and refocused itself on its customers. Relentlessly. And it’s working.

The Surface Mini was killed; given it would have run Windows RT, it rightfully never saw the light of day. Windows RT itself was also rightfully killed – though it represents an interesting missed opportunity I may blog about as a separate topic.

But the Surface Pro 3 emerged, and it is a genuinely great device. All the rough edges of the first- and second-generation devices were filed off. Now, finally, there was a decent Windows laptop (some software/driver quirks notwithstanding). The Surface 3 LTE gave hardcore mobile users something very decent (also with some software/driver quirks). The Surface Pro 4 is nicely providing refinements where needed – and I can’t wait to get one when the model I want is available. And the Surface Book – although controversial – is a very interesting contender against the MacBooks.

Windows 10 is another story. It is actually very pleasant to use. All of the goodness of Windows XP and 7 is brought forward. The good ideas of Windows 8 and 8.1 remain. But it’s all nicely wrapped into a single package. And it delivers an experience and applications that are quite pleasant when put up against stiff competition from Apple. And, with Threshold 2, it mostly works as expected from a stability perspective.

How do I know it’s working? I had several devices I could have pulled out on this long Washington DC->Seattle flight tonight; the Surface Pro 3 with Windows 10 on Threshold 2 won – as the Surface 3 or Pro 3 did in many meetings this week while traveling.

And it’s not just me. I have traveled extensively for the last four years. Airports and airliners are a great litmus test of public opinion globally. Apple was clearly winning. But now I’m seeing Surface Pro 3s and Surface 3s pop up more and more, and most of the ones I see lately are running Windows 10. This is a much-needed regaining of ground with high-mobility professionals, a segment Microsoft has been absent from for many years. And in an even bigger testament, the Surfaces have captured the interest of my mother, who has used nothing but Apple since 2006.

The recipe for this success is very simple. They are focusing relentlessly on their customers and on delivering a quality user experience. There’s no rocket science to this; it’s just very hard work.

On the hardware front, I applaud Microsoft for the Surface. They have brought much-needed cachet back to Windows machines – something the OEM partners were definitely not doing. Although it’s taken far too long and several iterations, credit where it’s due for finally building a great desktop operating system and hardware combination.

(This post was written over Idaho on my Surface Pro 3 running Windows 10 Build 10586.)

Misadventures in Home Networking (or creating house-wide 2.4 GHz and 5 GHz coverage)

One of the perils of having a house with a lot of square footage, built in the 1980s, is that it is impossible to get WiFi coverage from a single access point. And there is no Ethernet wiring in the walls. This is the tale of how I got this problem reliably resolved, such that there is decent WiFi coverage throughout the house and on the two outdoor patios.

Resolving the Cable Backbone (or lack thereof)

Short of pulling CAT6 through the walls, there are two technologies for creating a virtual cable backbone: Powerline Ethernet (which, as its name implies, uses the electrical wiring) and MoCA (which uses the coaxial TV cabling). Both offer several hundred megabits of capacity.

MoCA seemed the ideal solution – except that I was unable to get it to work due to IP address conflicts with the neighbors (seriously!). It turns out (per a very helpful upper-tier tech at Comcast) that I probably need a MoCA filter where the cable enters the house – and these did not exist in the mid-1980s.

Instead, I used Powerline networking with hubs/access points in strategic locations, running back to my office, where it is patched into a Gigabit Ethernet switch, which in turn is connected to my router. I plan to go back and revisit MoCA once I can source and try a filter.

WiFi Hardware

For the routers themselves, I went with the Asus RT-87 and am using the open-source Merlin firmware. The main one is configured as a router, the second and third as access points. IPs are managed by redundant DHCP servers running on Windows Server 2012 R2. DNS and WINS are both local, with DNS requests forwarded to Comcast’s DNS servers. The reason I went with this approach, beyond having greater control, is that DNS records are automatically updated, making it easy for devices to find each other on the network. The other custom setting I enabled, besides WPA2 AES encryption, was dropping clients whose signal falls below -70 dBm, to force devices to switch access points while traversing the house. 

The WiFi Channel Config

Getting the three access points to play nice with each other is no mean feat. If they are all configured identically, they will overlap and interfere with each other. So the 2.4 GHz and 5 GHz radios each need to be configured so that the access points do not broadcast over one another.

The 2.4 GHz Channel Configuration

2.4 GHz radios can be configured to use one of a handful of channels and a 20 MHz, 20/40 MHz, or 40 MHz channel width. A 20 MHz width should be selected unless one only has a single access point, in which case 20/40 MHz should be selected. A 40 MHz width likely won’t be compatible with many devices, and if any 40 MHz channel width is selected, effectively the entire 2.4 GHz spectrum is consumed. So, with three access points, the only channels that don’t overlap are:

  • Channel 11
  • Channel 6
  • Channel 1

– making the choice quite simple, with a cap of three non-overlapping access points (the overlap arithmetic is sketched below).
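
Here is that overlap arithmetic made concrete in a short Swift sketch (my own illustration, assuming each 2.4 GHz transmission occupies roughly 22 MHz of spectrum):

    // Illustrative only: 2.4 GHz channel n (1-13) is centered at 2407 + 5n MHz,
    // so adjacent channels sit just 5 MHz apart while a transmission occupies
    // roughly 22 MHz of spectrum.
    func centerMHz(ofChannel channel: Int) -> Int {
        return 2407 + 5 * channel
    }

    func channelsOverlap(_ a: Int, _ b: Int, occupiedBandwidthMHz: Int = 22) -> Bool {
        // Two channels overlap if their centers are closer than one channel width.
        return abs(centerMHz(ofChannel: a) - centerMHz(ofChannel: b)) < occupiedBandwidthMHz
    }

    // Channels 1, 6, and 11 are 25 MHz apart, so no pair of them overlaps:
    print(channelsOverlap(1, 6), channelsOverlap(6, 11), channelsOverlap(1, 11))  // false false false
    // Anything closer than five channels apart does overlap (e.g. 1 and 5):
    print(channelsOverlap(1, 5))  // true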

The 5 GHz Channel Configuration (and where it really gets complicated)

5 GHz channels are a bit more complex, as you can configure 20 MHz, 40 MHz, or 80 MHz channel widths, along with the respective fallbacks. Using 20 MHz channels makes little sense, as you’d be unable to leverage the higher bandwidth of the newer WiFi standards; the latest 802.11ac standard needs 80 MHz channels to reach its top speeds.

Using a 40 MHz channel width, you can get Channels 36, 44, 149, and 157 on most access points – none of which overlap. This means you are limited to four access points. If you want to use 80 MHz channels, then you are limited to two access points – on Channels 36 and 149 only.

The other wrinkle is that the 1xx channels are not visible to devices more than a few years old. But since everything here is new enough (2012+, or can fall back to 2.4 GHz), I went with a 40 MHz channel width and Channels 149, 157, and 44. At some point, I may explore enabling an 80 MHz channel width on Channel 149 for the main-floor (and most used) access point and then using the other two at 36 and 44.
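
And the same bookkeeping for the 5 GHz plan, as a Swift sketch (again my own illustration, covering only the non-DFS channels typically exposed by North American consumer routers):

    // Illustrative only: non-DFS 5 GHz channels grouped by how they bond at wider
    // widths. Each 40 MHz channel bonds two 20 MHz channels; each 80 MHz channel
    // bonds four.
    let fortyMHzGroups: [[Int]]  = [[36, 40], [44, 48], [149, 153], [157, 161]]
    let eightyMHzGroups: [[Int]] = [[36, 40, 44, 48], [149, 153, 157, 161]]

    // At 40 MHz there are four non-overlapping slots (primaries 36, 44, 149, 157),
    // so four access points can coexist.
    print(fortyMHzGroups.count)   // 4

    // At 80 MHz only two slots remain (primaries 36 and 149), so a third access
    // point would be forced to share spectrum with one of the first two.
    print(eightyMHzGroups.count)  // 2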

Parting Thoughts

This was far more complicated to get right than I ever envisaged. At this point, if I need more coverage, I’ll have to get repeaters given the channel saturation. And there is not a lot written on how to set this up. Commercial-grade WiFi solutions make this easier – such as a Cisco Meraki system – but those cost far more than anyone would reasonably be willing to invest in home WiFi. Eero looks very promising – but is not yet available. So hopefully this helps if you find yourself in a similar situation.

Home WiFi Misadventures

The point of this post is to hopefully save any reader the aggravation I just went through over the last several hours – and the years of missed performance that preceded them.

The Topology:

The main access point is an Apple AirPort Extreme 802.11ac connected via Gigabit Ethernet to a Comcast cable modem. The access point is connected to a 600 Mbps Powerline network via its Gigabit ports and to a Gigabit switch for local hardwired clients. There are two Apple AirPort Express 802.11n access points connected via the Powerline network for the upper and lower floors of a three-floor home, respectively. Everything was auto-configured using Apple defaults.

The Symptom:

I had just purchased a Surface Pro 3 and discovered that its WiFi speed was 3 Mbps. After resolving several Microsoft-related issues, I boosted performance to a whopping 15-20 Mbps. Not settling for this, I spent several hours researching…

The Problems:

I discovered several issues upon troubleshooting everything:

  • The out-of-box defaults use a single SSID for both the 2.4 GHz and 5 GHz bands. It turns out that a number of Windows systems do not like this configuration and will end up on the 2.4 GHz band. Based on anecdotal testing, I may have had some Apple devices in this category as well. 
  • The out-of-box defaults left no control over channels. Clearly I had some channel overlap going on between the access points, which explained why performance degraded when devices were active on more than one access point.

The Solution:

  • Configure a separate SSID for the 2.4 GHz and 5 GHz ranges. Point any 5 GHz capable device to the 5 GHz SSID. 
  • Manually configure the channels for the 2.4 GHz ranges
    • Channels 1, 6, and 11 were used, as they do not overlap. Note: you can really only fit three 2.4 GHz access points in a given location before you start getting channel overlap, as the spectrum used by each channel overlaps with its neighbors (e.g., Channel 1 overlaps with Channels 2 through 5).
  • Manually configure the channels for the 5 GHz ranges
    • Channels 36, 40, and 44 were used; there are varying reports that these would slightly overlap but no issues were seen in routine testing

Now, 5 GHz devices routinely get 100+ Mbps from the Internet when on the main access point and 50-60 Mbps when on a remote access point going through the Powerline network. 

Hope this helps!

Lots of Good Stuff from Build and Beyond

So far, Microsoft has done a very good job of impressing me over the last week. In fact, I have seen some of the best decision-making out of the company in years. Although a lot of this may be due to One Microsoft and other forces already in play, some credit also certainly goes to Satya Nadella in his new role as CEO.

In short, what I saw that I liked:

  • Office for iPad – The overnight success of this shows that Microsoft has been leaving money on the table, even with a 30% cut going to Apple. Given it’s a v1.0 and is this polished, I look forward to what the updates will bring. Hopefully they will update it frequently in the style of a true iOS application and not on a Microsoft ship cadence. It has earned overnight status on my daily-use list and is hopefully a harbinger of a lot more to come in delivering great experiences on other platforms.
  • Scott Guthrie – I worked for Scott directly for two years. I would count those as amongst the absolute top highlights of my career. He is a rare mix of technical excellence, but more importantly – customer focus. Including looking beyond the boundaries of the Redmond campus. He is absolutely the right choice to deliver technically innovative products that will do what people want. Congrats to him and kudos to Satya.
  • Universal Applications – This has long been one of Microsoft’s biggest opportunities. They finally delivered. And offered some decent migration strategies to boot. Microsoft built much of its success on developers, but that has gone by the wayside for a while. This felt like the start of them getting their groove back. The Xbox One support is a fantastic touch and key differentiator – and may end up seeing me get one even though I am a staunch anti-gamer in the wake of the discontinuation of Flight Simulator…
  • Cortana – This one exceeded expectations. Microsoft played to its strengths and brought in context from all of the other things they have the ability to touch, such as your inbox and calendar. And they gave you the extensibility to teach it (also playing to one of their historical strengths not seen lately). They acted uncharacteristically cool by keeping the codename from the Halo franchise, using the voice actor from the Halo games, and hiring screenwriters to make the dialogue entertaining. Now, can we get a desktop and tablet version please? Maybe, unlike Google Now, it won’t think I work at the local QFC (another story)…
  • Mea Culpa on Windows 8 Desktop – The renewed focus on the desktop was great to see. It’s what people fundamentally want. Tim Cook was right on this one; you have to have a device-appropriate experience, which Windows 8 failed to deliver. Coupled with the lack of applications on the tablet side, it turned into a disaster. Giving people the experience they want and a true universal platform gives it hope…we shall see.
  • Free Windows on < 9″ Devices – This was the unexpected one (along with offering O365 via the Apple App Store). Microsoft has long held onto its licensing policies with religious fervor. Seeing them be pragmatic about where they are in the market was a refreshing change, and one guaranteed not to have happened under the previous regime.

What was unimpressive was Windows Phone 8.1 itself – Cortana aside, and save for the very elegant dual-SIM support (very key for international travelers like me, or for those in emerging markets where multiple carriers are a reality). The rest was a me-too play that should have been there all along.

Overall, my expectations have been exceeded. Kudos to everyone in Redmond – and especially all of my friends and former colleagues. Congratulations on delivering some of the best in a very long time and finding some of the right grooves again.

Converting from Parallels to Hyper-V

You would think converting a virtual machine from one format to another would be simple. Especially when there are multiple blogs out there on how to do it. But of course not – so it’s time for another misadventure-in-computing post. I was asked by a colleague to provide a copy of a demo virtual machine for some upcoming analyst activities. The problem: my colleague’s system is a Surface Pro 2 running Hyper-V on Windows 8.1, the source system was a personal Retina MacBook Pro running Parallels 9, and the VM is running Windows Server 2012 R2.

The core issue was that the source image in Parallels was a virtual UEFI system with a GPT partition table. The destination image was a Gen 1 Hyper-V VM with a virtual BIOS, which necessitates an MBR partition table. (I was unable to get the Gen 2 Hyper-V VM type that supports UEFI to work, given the differences in hardware drivers.) 

The first problem is actually converting the disk file itself, as the two are in very different formats. The documented steps of using VMware and VirtualBox utilities to do the conversion did not work – I ended up with a corrupt virtual hard disk (probably because of the GPT issue). 

To solve this, I found the awesome Disk2VHD utility from Sysinternals (http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx). I ran it inside the source VM, chose not to grab anything except the C:\ partition, and voila – I had a new VHD that I could load up on my Hyper-V server without corruption.

Then, the fun of trying to get it to boot began. The normal way to fix a modern Windows system that will not boot is:

  • Boot from Windows installation image
  • Go to Command Prompt
  • Enter the following commands:
    • “bootrec /fixmbr” – writes a new master boot record
    • “bootrec /fixboot” – writes a new boot sector
    • “bootrec /scanos” – scans the disk for Windows installations
    • “bootrec /rebuildbcd” – except this critical last step fails because the partition table is not MBR.

So, you have to convert the partition table to MBR. And that’s where life gets interesting. After trial and error, I found the following to be the best way to do this without losing data:

  • Download the latest Live x86_64 Fedora image from http://www.fedoraproject.org/. Yes, you really need a live Linux image because of a really handy utility called gdisk.
  • Boot the VM with the Fedora Live image and login to the desktop.
  • Run the Terminal utility.
  • Then, run the following commands:
    • “su -” – get into administrator mode
    • “yum -y install gdisk” – actually install the gdisk utility
    • “gdisk /dev/sda” – start running gdisk against the virtual hard drive
    • “r” – to enter gdisk recovery/transformation mode
    • “g” – to convert from GPT to MBR
    • “p” – to preview the converted MBR partition table
    • “w” – to write changes to disk.
  • At this point, the VM can be rebooted with Windows installation media to get to a Command Prompt to fix the rest of the problems. Once there:
    • “diskpart” – enters the Windows disk partition utility
    • “select disk 0” – selects the boot drive
    • “list partition” – should show the disk partitions present. The largest one is the one you actually want to boot. It is probably going to be the third or fourth partition on the disk; in my case it was the fourth.
    • “select partition 4” – select the partition we want to boot.
    • “active” – to mark the partition as active in MBR.
    • “exit” – to exit diskpart
  • The next step is to find the drive letter of the Windows volume itself – it is probably going to be D:\ (at least it was on my system) because of the reserved GPT partitions that precede the actual usable partition. Once you determine the drive letter, you can proceed as follows:
    • “bootsect d: /nt52 /force /mbr” – makes D: bootable.
    • “bootrec /fixboot” – fixes core startup environment
    • “bootrec /fixmbr” – fixes core startup environment
    • “bootrec /scanos” – find the OS; note it will probably be D:\Windows in my example (and this is OK)
    • “bootrec /rebuildbcd” – update the BCD environment; note it will be D:\Windows in my example (and this is OK)
  • At this point the system can be rebooted without any installation media – and it should just boot up with everything in C:\Windows\ as it should be. Once booted:
    • Uninstall Parallels Tools
    • Install Hyper-V Integration Services (if required – not applicable on this system since Windows Server 2012 R2 has them built-in)
    • Reactivate Windows

And voila! I hope this helps.