Barcelona 2008: Parc Guell


Parc Guell #6
Originally uploaded by rdonovan

For my second photographic expedition during the 2008 Barcelona trip, I made it to Parc Guell. Besides being an amazing vista, it is home to a considerable amount of architecture and artwork by Gaudi. I managed to capture both – at dusk.

This expedition required significant use of flash photography, as I was far too late in the day to have enough natural light to capture Gaudi’s works. The Speedlite 430EX came in quite handy. As did the image-stabilized 100-400mm and 24-105mm lenses – for the panoramic photographs – as I had no tripod.

Enjoy the results at: http://flickr.com/photos/rdonovan/sets/72157611650634380/.

Some Updates to the CS2009 Connect Site

After getting some feedback over the holiday week, we have made a few accessibility improvements to the Connect site for Commerce Server 2009. If you are already registered and are having difficulty accessing the README and Installation Guide, please visit https://connect.microsoft.com/SelfNomination.aspx?ProgramID=2838&pageType=1&SiteID=643. If you are a first-time visitor, please start at https://connect.microsoft.com/site/sitehome.aspx?SiteID=643. Hope this helps!

Announcement: Commerce Server 2009 & December CTP Availability

Today is another major day for Commerce Server. Commerce Server 2009 (CS2009) is now available as a feature-complete community technology preview. (Previously, incomplete builds of CS2009 were available under the guise of Commerce Server 2007 Codename “Mojave”.) This release represents the culmination of effort on many different fronts. Its key features include:

  • Multi-Channel Commerce Foundation – Debut of the new programming model for Commerce Server going forward + enhanced e-commerce capabilities.
  • SharePoint Commerce Services – Integration between Commerce Server and SharePoint technologies (WSS and MOSS) + a complete SharePoint-based out-of-box shopping experience.  
  • Culmination of many internal updates over Commerce Server 2007, SP1, and SP2.

To get the bits, please visit:

Please submit your feedback and questions through the Connect Product Feedback Center.

In the days and weeks ahead, I will be posting a lot more about CS2009, so stay tuned. In the meantime, check it out and enjoy. Be seeing you!

Wrap-Up: Commerce Server 2007 Architecture Series

The Commerce Server 2007 architecture series I started the other month is now complete. My hope is that this information is useful for those seeking to get acquainted with Commerce Server – and to learn some of the caveats one must consider when using the product. Additionally, hopefully some of the context shared will provide insight on why certain things work in the manner in which they are implemented within the product itself.

The suggested reading order is:

  1. Philosophy – http://blogs.msdn.com/rdonovan/archive/2008/09/01/cs2007-software-architecture-series-part-1-philosophy.aspx
  2. Platform Basics – http://blogs.msdn.com/rdonovan/archive/2008/10/28/cs2007-software-architecture-series-part-2-platform-basics.aspx
  3. What’s Under the Hood – http://blogs.msdn.com/rdonovan/archive/2008/10/30/cs2007-software-architecture-series-part-3-what-s-under-the-hood.aspx
  4. Key Subsystem Overview – http://blogs.msdn.com/rdonovan/archive/2008/11/01/cs2007-software-architecture-series-part-4-key-subsystem-overview.aspx
  5. IT Professional & Business User Component Overview – http://blogs.msdn.com/rdonovan/archive/2008/12/14/cs2007-software-architecture-series-part-5-it-professional-business-user-component-overview.aspx
  6. Application Design Considerations – http://blogs.msdn.com/rdonovan/archive/2008/12/17/cs2007-software-architecture-series-part-6-application-design-considerations.aspx
  7. System Usage Considerations – http://blogs.msdn.com/rdonovan/archive/2008/12/17/cs2007-software-architecture-series-part-7-system-usage-considerations.aspx
  8. Web Site Architecture Considerations – http://blogs.msdn.com/rdonovan/archive/2008/12/18/cs2007-software-architecture-series-part-8-application-design-considerations.aspx

And, for your convenience, the entire series is tagged at http://blogs.msdn.com/rdonovan/archive/tags/CS2007+Software+Architecture/default.aspx.

For a shorter URL, try http://tinyurl.com/3mnebm.

Hope this helps!

CS2007 Software Architecture Series Part 8: Web Site Architecture Considerations

32-bit or 64-bit

The first thing to think about when building the end application is whether it should be a 32-bit or 64-bit application. Obviously, 64-bit is preferred – but there are many driving factors that can force 32-bit support to be used (even when running on an x64 operating system).

In short, 32-bit mode is typically forced by a piece of 32-bit legacy code running somewhere in the application. Common culprits are older or third-party applications such as external content management systems or search tools. Within the realm of Commerce Server, pipeline components will typically force 32-bit mode, as many ISVs have not updated their components for x64.

If 32-bit mode is used, each worker process can address up to 3GB of memory. Memory consumption should be planned accordingly – along with separate worker processes to host multiple sites or applications if more than 3GB is needed in aggregate.
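As a rough planning sketch, the 3GB-per-process ceiling turns capacity planning into a simple allocation exercise. The site names and memory figures below are hypothetical, and the greedy grouping is just one way to do the arithmetic:

```python
# Sketch: assign sites to 32-bit worker processes so that no process
# exceeds the ~3GB addressable limit. All figures are illustrative.
LIMIT_MB = 3 * 1024  # ~3GB per 32-bit worker process

def plan_worker_processes(site_memory_mb):
    """Greedily group sites into worker processes under LIMIT_MB each."""
    pools = []  # each pool is a list of (site, mb) tuples
    for site, mb in sorted(site_memory_mb.items(), key=lambda kv: -kv[1]):
        if mb > LIMIT_MB:
            raise ValueError(f"{site} alone exceeds the 32-bit limit")
        for pool in pools:  # place into the first pool with room
            if sum(m for _, m in pool) + mb <= LIMIT_MB:
                pool.append((site, mb))
                break
        else:
            pools.append([(site, mb)])  # no room anywhere: new process
    return pools

# Hypothetical sites and their projected working sets in MB
pools = plan_worker_processes({"storefront": 2200, "admin": 600, "feeds": 900})
print(len(pools))  # number of worker processes needed
```

The same reasoning applies whether the grouping is done by hand or scripted; the point is that the per-process ceiling, not total server RAM, is the binding constraint in 32-bit mode.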

2-tier or 3-tier

When building a Commerce Server site, one of the most oft-debated subjects is whether to use a 2-tier or 3-tier architecture. In practice, many customers want a 3-tier architecture. The principal factors driving 3-tier are:

·         Ability to physically separate the database tier

·         Better abstraction from a code perspective

·         Ability to support other business channels or external punch-outs (e.g. – A site running on another platform needs to use Commerce Server as a back-end.)

The practical reality is that a 3-tier architecture will almost always be slower than a 2-tier architecture. If performance and hardware footprint are paramount concerns, then 3-tier should not be considered. If it is needed from a business-requirements perspective, then it is a requirement. If it is indeed needed and performance is a concern, consider running both 2-tier and 3-tier code on top of the same Commerce Server instance, so that the 3-tier performance penalties are only incurred in the scenarios that explicitly need them.

Building a 2-tier Site

This section identifies the design considerations that need the most thought and analysis when architecting a 2-tier application with Commerce Server. A 2-tier application written with Commerce Server is fundamentally an ASP.NET application at its core, so all standard best practices for architecting an ASP.NET application apply. Given that this is a well-known and well-understood problem space, ASP.NET best practices shall not be regurgitated here. Instead, refer to http://www.asp.net/ or http://msdn.microsoft.com/en-us/asp.net/default.aspx.

Caching

When architecting a 2-tier ASP.NET application using Commerce Server, by far the single most important architectural consideration is caching. This alone will have more of an impact upon performance than almost any other single consideration.

There are several types of caching applicable to a Commerce Server application built in ASP.NET, specifically:

·         Profile Cache – This is an intrinsic cache of data within Commerce Server’s Profile system.

·         Catalog Cache – This is an intrinsic cache of data within Commerce Server’s Catalog system.

·         Pipeline Pool – This is an in-memory pool of pre-instantiated Commerce Server Order Pipelines, aimed at reducing the call time to the Pipeline.

·         ASP.NET Caches – At a developer level, both data and output can be cached utilizing the standard means within ASP.NET.

With that understanding in hand, one can then apply all of these throughout the construction of an ASP.NET application based upon Commerce Server 2007. In an attempt to simplify caching philosophy to something that can be easily understood and followed, the guiding principles are:

·         Always use the Pipeline Pool – it will greatly reduce call time on pages calling Pipelines.

·         The Profile Cache is always on – understand how it works and plan accordingly as it cannot be disabled.

·         Always cache the Commerce Server context objects in ASP.NET – this will greatly reduce the time required to instantiate and call the Commerce Server APIs.

·         Balance Catalog Caching with ASP.NET output caching – otherwise the same data could end up being cached in multiple locations, wasting memory.

To use the Pipeline Pool, all components must be free-threaded or neutral-threaded – or written in .NET. Any component that does not meet this criterion should be upgraded or replaced if at all possible, as it will entirely negate the ability to pool pipelines. Because no two pipeline configurations are the same, memory consumption should be profiled with Performance Monitor and factored into the application’s memory requirements. The pool size should be adjusted to the number of pipelines likely to be active at one time – so as not to needlessly waste memory. This can be determined by building a projected usage profile of the site and/or examining Web log files – simply look at the number of concurrent calls to pages or paths utilizing pipelines and set the pool size accordingly. Note: the pool size is a global setting, so factor it across all pipelines in use in aggregate across the site.
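The sizing step above can be sketched as a small log analysis: scan request records for pipeline-backed paths and find the peak number of overlapping requests. The record format and page paths here are hypothetical stand-ins; real IIS logs would need proper parsing:

```python
# Sketch: estimate a pipeline pool size from web-log concurrency.
# Each record is (start_seconds, duration_seconds, path); paths are made up.
PIPELINE_PATHS = {"/basket.aspx", "/checkout.aspx"}

def peak_concurrency(records):
    """Peak number of simultaneously active requests to pipeline pages."""
    events = []
    for start, duration, path in records:
        if path in PIPELINE_PATHS:
            events.append((start, 1))              # request begins
            events.append((start + duration, -1))  # request ends
    peak = active = 0
    for _, delta in sorted(events):  # ties: ends (-1) sort before starts
        active += delta
        peak = max(peak, active)
    return peak

log = [
    (0.0, 0.5, "/basket.aspx"),
    (0.2, 0.5, "/checkout.aspx"),
    (0.3, 0.5, "/basket.aspx"),
    (0.4, 0.2, "/home.aspx"),    # not a pipeline page; ignored
    (1.0, 0.3, "/basket.aspx"),
]
print(peak_concurrency(log))  # 3 → a starting point for the pool size
```

The resulting peak is a floor, not the final answer – headroom for traffic spikes should be added on top, since the setting is global across all pipelines.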

The Profile Cache holds profile data in memory – and then does lazy, committed writes to SQL Server as data is updated. It can potentially cause data corruption unless strict session affinity is maintained: a user can bounce from one server to another in a Web farm and receive inaccurate data, since the latest updates might not yet have been written to the database.

As session affinity can be complex to establish and maintain, a better practice is to force-write profile data to the database every time it is updated. This should still be done sparingly, as it is an expensive operation (e.g. – don’t do it every time Profiles are touched – just upon updates). Then there are no considerations around session affinity. The Profile cache size should be monitored in Performance Monitor to factor memory consumption requirements, and then set accordingly as a configurable parameter.
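The idea of writing through only on genuine updates (never on reads, and never on no-op writes) can be sketched generically. `save_to_db` here is a hypothetical stand-in for whatever persistence call the application actually uses:

```python
# Sketch: a profile wrapper that force-writes to the backing store only
# when a property actually changes. save_to_db is a hypothetical hook.
class WriteThroughProfile:
    def __init__(self, data, save_to_db):
        self._data = dict(data)
        self._save = save_to_db
        self.writes = 0  # exposed for illustration only

    def get(self, key):
        return self._data.get(key)  # reads never touch the database

    def set(self, key, value):
        if self._data.get(key) == value:
            return  # value unchanged: skip the expensive write
        self._data[key] = value
        self._save(self._data)  # force-write on genuine updates only
        self.writes += 1

saved = []
profile = WriteThroughProfile({"email": "a@b.com"}, saved.append)
profile.get("email")                 # read: no write
profile.set("email", "a@b.com")      # unchanged: no write
profile.set("email", "new@b.com")    # changed: exactly one write
print(profile.writes)  # 1
```

With every update durably written, any server in the farm can serve the next request, which is what removes the session-affinity requirement.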

The rest of the cacheable data on the site revolves around the Catalog. The absolute best practice here is to output cache at a page fragment/user control level. Output caching in general will provide the best performance for caching catalog data. However, it needs to be done at a page fragment/user control level rather than a page level – or other things, like Profile authentication, might also get cached and cause the wrong behavior to occur.

With respect to output caching, one must consider that the site’s memory consumption will grow in proportion to the size of the catalog and how much of it is actually browsed. This is the most significant complicating factor in an output-caching strategy and represents the “price” that comes with the performance – as there is no easy or simple way to model the usage considerations. This will need to be planned based upon a calculated usage profile or analysis of actual Web logs. The effects of cache flushing, cache warming, or insufficient memory to cache everything must be factored against the projected usage. Remediation steps may need to be taken (e.g. – scripted cache warm-up when resetting the entire Web farm, rolling updates, cache resets when staging data, etc.) to ensure proper behavior.

Additionally, with output caching, one must set expirations appropriately to ensure data is suitably fresh. This is why utilizing fragments is important: data that must by necessity be fresher (such as Inventory) can have a lower expiration value, while more general catalog data can generally suffice with a higher one.
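A per-fragment expiration policy can be sketched with a small TTL cache. The fragment names and TTL values below are illustrative, not Commerce Server settings:

```python
import time

# Sketch: fragment-level output cache with per-fragment TTLs.
# Fresh-sensitive fragments (inventory) expire faster than general ones.
TTL_SECONDS = {"inventory": 5, "catalog_detail": 300}  # illustrative values

class FragmentCache:
    def __init__(self, clock=time.monotonic):
        self._store = {}   # (fragment, key) -> (rendered_html, expires_at)
        self._clock = clock

    def get_or_render(self, fragment, key, render):
        now = self._clock()
        entry = self._store.get((fragment, key))
        if entry and entry[1] > now:
            return entry[0]  # still fresh: serve cached output
        html = render()      # expired or missing: re-render and re-cache
        self._store[(fragment, key)] = (html, now + TTL_SECONDS[fragment])
        return html

t = [0.0]  # fake clock so the example is deterministic
cache = FragmentCache(clock=lambda: t[0])
calls = []
render = lambda: calls.append(1) or "<div>5 in stock</div>"
cache.get_or_render("inventory", "sku42", render)
t[0] = 3.0
cache.get_or_render("inventory", "sku42", render)  # within TTL: cached
t[0] = 6.0
cache.get_or_render("inventory", "sku42", render)  # expired: re-rendered
print(len(calls))  # 2
```

In ASP.NET the same effect comes from per-user-control cache durations; the sketch just makes the tradeoff concrete: a 5-second inventory TTL bounds staleness while still absorbing bursts of identical requests.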

Page Design

Writing ASP.NET code for extensibility and best practices versus writing it for performance are two different arts. A simple site that is lean and mean will perform best, but offer the worst extensibility and maintainability – and vice versa.

In general, if performance is the principal concern – beyond the caching practices described in the preceding section – consider the following:

·         Do not use session state or view state

·         Avoid utilizing the ASP.NET intrinsic objects such as the form controls

·         Leverage JavaScript on the client whenever possible versus requiring logic at the server

·         Generally avoid many design patterns (such as UIP or Web Client Software Factories) and code strictly to the actual functional requirements

International Considerations

Building an international-ready Commerce Server application is a mix of interesting considerations, largely because Commerce Server is not fully internationalized itself. A summary of the core product’s language capabilities is as follows:

·         Profile: Single language, in any locale supported by the underlying data store.

·         Marketing: Single language, in any locale supported by SQL Server.

·         Catalog: Multi-lingual (e.g. – you can have French and English versions of the same attribute in the same record/row in the Catalog), in any locale supported by SQL Server.

·         Inventory: Multi-lingual, as it relies upon the Catalog system.

·         Orders: Single language, in any locale supported by SQL Server.

The Catalog is also single-currency by default. Separate fields will be needed to support multiple currencies, or a lookup table can be utilized to convert across exchange rates.
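The two currency approaches (explicit per-currency price fields versus an exchange-rate lookup table) can be sketched side by side. The field names and rates below are hypothetical, not actual Catalog schema:

```python
from decimal import Decimal

# Sketch: resolve a display price from a catalog record that carries one
# explicit price field per currency, falling back to an exchange-rate
# table when no explicit field exists. Names and rates are made up.
RATES_FROM_USD = {"USD": Decimal("1"), "EUR": Decimal("0.8")}

def display_price(record, currency):
    explicit = record.get(f"cy_list_price_{currency.lower()}")
    if explicit is not None:
        return explicit  # preferred: an explicitly managed price
    # Fallback: convert the base (USD) price via the rate table
    return (record["cy_list_price_usd"] * RATES_FROM_USD[currency]
            ).quantize(Decimal("0.01"))

item = {"cy_list_price_usd": Decimal("19.99"),
        "cy_list_price_eur": Decimal("18.50")}  # explicit EUR price set
print(display_price(item, "EUR"))  # explicit field wins: 18.50
print(display_price({"cy_list_price_usd": Decimal("19.99")}, "EUR"))
```

The explicit-field approach trades schema width for pricing control (a EUR price need not track the exchange rate at all); the lookup table trades pricing control for a single source of truth.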

With the aforementioned understanding, an international ASP.NET-based site using Commerce Server can be built by observing the following considerations:

·         When building the ASP.NET site, utilize resource files, thereby separating page content from page design from page code.

·         Make sure that all labels that must surround the Profile and Order data are stored in resources at the ASP.NET level – this way only the form data is going back and forth to the database and no presentation tier elements are affected.

·         Configure the Catalog as multi-lingual and retrieve data based upon the desired locale – matching it to the ASP.NET site locale.

·         If multiple currencies are required, the best practice is to have one currency field per required currency and set explicit pricing. This way the variations caused by managing exchange rate tables can be overcome (not that many businesses utilize this for international pricing regardless). Then pull the currency based upon the ASP.NET site’s locale.

·         For Marketing, the situation is somewhat more challenging because it does not support multi-lingual data – and there are many display attributes, such as discounts shown in the basket, advertisements containing text content, etc. The best way to handle this is to define locale as custom targeting criteria, pass the locale value into the custom targeting context from the site, and then pull marketing campaigns based upon it. This unfortunately requires that campaigns be duplicated in every locale supported by the site, but it solves the problem gracefully from a display standpoint. Alternatively, one can work around the single-language limitation by storing placeholder data in the database and resolving it to an appropriately localized resource string at runtime – which would require a fair bit of code and be semi-awkward to implement, though arguably not more awkward than the first workaround.

Security Considerations

There is nothing special about securing a Commerce Server application written in ASP.NET; it is essentially just another Web site. The critical consideration is ensuring that no injection attacks or other forms of malicious behavior are possible through manipulation of the Web page. At the page design level, this means validating such things as:

·         Query strings

·         Form posts

·         HTTP Referrers (e.g. – make sure that only the correct preceding page (if in a sequence of pages) can call the current page)

·         Cookies (to ensure that they cannot be easily forged)
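The validations above amount to whitelist checks in a request filter: reject anything that does not match an expected shape, rather than trying to sanitize it. A minimal sketch, in which the parameter name, pattern, and allowed-referrer list are all hypothetical:

```python
import re
from urllib.parse import urlparse

# Sketch: whitelist-style request validation. All names are illustrative.
SKU_PATTERN = re.compile(r"^[A-Za-z0-9\-]{1,32}$")  # allowed sku shapes
ALLOWED_REFERRER_HOSTS = {"shop.example.com"}       # valid preceding pages

def validate_request(query, referrer):
    """Reject anything failing the whitelist rather than sanitizing it."""
    sku = query.get("sku", "")
    if not SKU_PATTERN.fullmatch(sku):
        return False  # malformed or injection-shaped query string value
    host = urlparse(referrer).hostname or ""
    if host not in ALLOWED_REFERRER_HOSTS:
        return False  # request did not come from an expected page
    return True

ok = validate_request({"sku": "AB-1234"}, "https://shop.example.com/basket")
bad = validate_request({"sku": "1; DROP TABLE--"}, "https://evil.example/")
print(ok, bad)  # True False
```

Note that referrer checks are a weak signal (headers can be forged or stripped) and belong alongside, not instead of, server-side state checks on page sequences.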

Pages in general – including custom error pages and other exception-handling sequences – should be made as non-CPU-intensive as possible, to withstand a denial-of-service-style attack of bulk HTTP requests.

As there is nothing specific about Commerce Server here, the general ASP.NET security best practices apply very well.

Reporting Considerations

If the Commerce Server Analytics system is to be utilized, several considerations must be factored into the site design process. By default, Commerce Server Analytics pulls data from two sources: the Commerce Server databases themselves and the IIS Web logs. SQL Server DTS tasks are utilized to import both sets of data.

If no schema customizations have been made outside of what is possible within the Commerce Server tools, then the default DTS tasks for database-based data can be utilized. If customizations have been made, then additional DTS tasks must be created and the data warehouse schema extended to facilitate importing the customized data.

With respect to Web logs, they are relatively easy to import – but they tell only a small portion of the story. To properly tell how a user is navigating a site, additional data points must be collected based upon specific user actions. Commerce Server’s Analytics system facilitates this through Custom Events. These can be dropped into any ASP.NET page programmatically, and they will capture/log additional data as defined for use in the Analytics system. They do, however, slow down page processing – and should be used with requisite care so as not to adversely affect site performance.

From a reporting perspective, no data from custom events or custom DTS tasks will be truly integrated into the out-of-box reports. So, to fully leverage the newly collected data, report customizations must be made and should be factored into the overall project accordingly.

Building a 3-tier Site

Commerce Server was not designed to be a 3-tier application – at least not at the time the product was originally created in 1998-1999 (for Commerce Server 2000). However, in today’s world – 3-tier applications are inescapable.

Commerce Server 2002 Feature Pack 1 and Commerce Server 2007 debuted ASP.NET (ASMX)-based Web Services. These are intended for data management only and are utilized internally by the BizTalk Adapters and Business User tools. They are fully documented and are appropriate for use as-is, or customized, for back-end data-update scenarios such as connecting to an SOA broker other than BizTalk.

They were not, however, designed for building a runtime site. In fact, there will be many API gaps present – as only the data management aspects of the overall Commerce Server API capabilities are represented. And those that are represented are optimized for data management, and not runtime scenarios.

A 3-tier Web site enables scenarios such as:

·         Communication from other systems in a punch-out scenario, such as Site A allowing checkouts to be processed by Site B

·         Enablement of other devices/channels, such as handheld, mobility, kiosks, etc.

To build a 3-tier front-end Web site, the following considerations should be examined:

·         Choice of technology – the obvious choices are ASP.NET Web Services (ASMX) or Windows Communication Foundation (WCF). Given future technology direction, a WCF-based solution will likely have far better longevity (in terms of future technology evolution) than a solution based on ASMX.

·         Commerce Server – All of the standard considerations from building a 2-tier ASP.NET Web site also apply equally to building a 3-tier Web site, regardless of whether or not the technology choice is ASMX or WCF. 

·         Service Design – This is probably the most critical piece. Web service calls are inherently expensive, so they should be consolidated as much as possible: “chunky” calls will always perform better than “chatty” calls. Unfortunately, Commerce Server’s API (like that of many other database-driven applications) assumes lots of small calls rather than fewer larger ones. The onus thus shifts to the developer to encapsulate the Commerce Server calls, within his or her own Web service design, so that they behave well in a batch mode. This will likely require a substantial amount of application-level data caching at the server (and possibly session affinity) – and should be planned for accordingly in terms of both physical deployment and memory utilization.

Given that there is nothing Commerce Server-specific about building Web services with Windows Communication Foundation, general best practices apply. For more information, please refer to http://msdn.microsoft.com/en-us/library/ms735119.aspx, a good set of resources for building WCF Web services in general – including those utilizing Commerce Server.
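The “chunky versus chatty” guidance above can be illustrated with a facade that batches many small lookups into one service call. The catalog service and its call counter are illustrative stand-ins; the counter models network round trips:

```python
# Sketch: a "chunky" batch facade over a "chatty" per-item API.
# The backing store and call counter are illustrative stand-ins.
class CatalogService:
    def __init__(self, products):
        self._products = products
        self.calls = 0  # each call models one network round trip

    def get_product(self, sku):          # chatty: one trip per item
        self.calls += 1
        return self._products[sku]

    def get_products(self, skus):        # chunky: one trip for the batch
        self.calls += 1
        return {sku: self._products[sku] for sku in skus}

backend = CatalogService({"a": 10, "b": 20, "c": 30})

# Chatty client code: three round trips for three items.
chatty = [backend.get_product(s) for s in ("a", "b", "c")]
chatty_calls = backend.calls

backend.calls = 0
# Chunky client code: one round trip for the same data.
chunky = backend.get_products(("a", "b", "c"))
print(chatty_calls, backend.calls)  # 3 1
```

When the underlying API only offers per-item calls, the batch method is where the developer's encapsulation work lives: it gathers the per-item calls server-side, where they are cheap, and exposes one coarse operation over the wire.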

CS2007 Software Architecture Series Part 7: System Usage Considerations

Figuring out how to correctly utilize the various Commerce Server systems is, in most cases, a matter of proper schema design coupled with understanding the breaking points of the various components so as not to exceed them. And this, of course, is based upon a solid understanding of the end-state business requirements – as already discussed.

The balance of this section enumerates the other considerations and typical “gotchas” of system architecture with Commerce Server. From a logical perspective, it probably makes the most sense to attack system design in the following order:

·         Profiles

·         Catalog & Inventory

·         Orders

·         Marketing

·         Analytics

By starting with the customer, then the goods being sold to the customer, then the orders to process the goods, then the discounts overlaid on top of the orders, and finally the reports against the entire experience – one can minimize changes as a result of dependencies between systems unearthed in the architecture process.

Profiles

The major consideration with the Profile system is where the data lives. Given the system’s flexibility, it can live in a variety of different places. Some considerations:

·         If the number of users is the principal concern, SQL is the best place to store the data – successful production deployments have gone to 60M+ on very moderate hardware.

o    The Partitioning feature adds a lot of complexity; its trouble factor should be carefully weighed against simply running on a single SQL database. Keep in mind that no customer has yet come close to needing more than a single-database deployment, given the scalability of SQL Server on today’s hardware.

·         Conversely, Active Directory can only store a small subset of users compared to SQL – ~10M is the practical limit.

·         OLE DB and ODBC sources – although supported – were not expressly tested during the development cycle of Commerce Server 2007, given the wide variety of possible combinations and sources; utilizing these capabilities potentially puts one into unexplored territory.

·         Having disparate sources – although convenient – is a major performance drain: the Profile system must query multiple data sources and union the results, rather than executing a single query.

Because there is no “canned” schema, it is up to the user to completely define the schema in whatever systems are chosen for storing the data. All standard best practices for designing a relational database schema (or Active Directory, as the case may be) apply. Some particular considerations relevant to Commerce Server profiles include:

·         Because this is a from-scratch defined schema, if utilizing SQL – make sure it has indexes (and in particular clustered indexes) that are relevant to the usage patterns of the data being stored.

·         Design the table structure with respect to how it will be queried and updated – pulling single tables will always be cheaper than pulling multiple tables with join operations.

·         Be sure to remember where/how attributes utilized in personalization for the Marketing system will be stored as these are often the most queried in a production system.

There are no specific limitations with respect to data sizing; however, the product was not tested above 60M profiles (a mix of anonymous and registered). That being said, going above this should not be a major concern.

The other major aspect of Profile system design is compliance with Payment Card Industry (PCI) standards. Storing any credit card data has major policy implications from the credit card merchants themselves. If at all possible, it is best not to store credit cards, as that alleviates many aspects of PCI compliance and the resulting independent audits. If storage is required, data encryption must be utilized – which the Profile system supports. This will, however, affect query/update performance accordingly.

Catalog

The Catalog system is designed such that one rarely has to touch the underlying database directly. The usage considerations are therefore far narrower in scope, but oftentimes more subtle than straightforward SQL design.

Some of the particular schema design considerations include:

·         Commerce Server stores a lot of metadata, and SQL Server has a row-size limitation of 8K. It is very easy to exceed this limitation when utilizing character-based data types. Using text-based data types instead eliminates the problem, but makes querying and accessing data slightly more complicated. It is better to go with text from the beginning, however, as changes later could be expensive and complex to effect against an existing system.

·         Any field that needs to be free-text searchable will need to have its indexes continuously updated by SQL Server; the potential for stale data or a significant amount of processing overhead for index rebuilding is very high. In general, it is better to be judicious and find other ways to search data (if possible) than leveraging free-text searching given these factors.

·         The Catalog is not truly multi-currency; plan on storing separate fields for each currency being supported or plan on leveraging an exchange-rate translation table. There is no in-between.

·         Adding language support is easy from a schema perspective – so this can be easily added at any time.

·         Obviously, simpler is better – keeping the structure as flat as possible will make for faster querying.

·         Storing binaries (e.g. – images or other multimedia files) was not a design consideration of the Catalog system; hence this should probably be avoided and links to the file-system or other content management systems utilized.

There is no practical limit to the number of base catalogs. The product has been tested to the levels published in the performance guide, which is available at http://www.microsoft.com/downloads/details.aspx?FamilyID=E79691F0-BE0F-40A6-940C-5D3A679C5526&displaylang=en.

Going beyond this should generally not present a problem but should be tested accordingly. The principal issue that usually arises is the re-indexing time for free text searchable properties – as noted above. Therefore the fewer of those present in the schema, the better.

The Catalog Sets feature has no specific limitation on the number of catalog sets. However, fewer is better: the list of catalogs and catalog sets must be enumerated on every single request involving catalog sets to match users to particular catalogs (and there is no way to cache this data). Keeping this to a small number will result in far better performance. When creating target expressions for Catalog Sets, it is best to structure the expressions so that a single table within the Profile system can be queried and join operations avoided, to ensure the fastest processing. (This may, in turn, affect Profile schema design as well.)

Virtual Catalogs provide an immense degree of flexibility – and, with it, an immense potential for complications. The same data-sizing limitations that apply to base catalogs apply here. The tested limit is 10,000 Virtual Catalogs; going above this is possible but requires careful testing. The other big consideration when utilizing Virtual Catalogs is materialization: it will result in the best runtime performance, but rebuilding from the base catalog takes time and overhead, and stale data could be presented while this is occurring.

Orders

The principal design considerations of the Order system are the schema and the Pipeline design. With respect to the schema:

·         Baskets are stored in binary format.

·         Orders are stored in a mix of binary format as well as normal SQL database tables; what goes where is determined by the Order Mapping XML.

·         Storing binaries in baskets and orders themselves should be avoided; this was never tested and will degrade performance.

·         ANY field that will be utilized after an order is captured should be stored in SQL, not in the binary field.

·         The data schema for SQL storage should be as simple and flat as possible, to minimize the effort required to persist an order to disk.

·         As the SQL schema is custom-designed by the developer, indexes must be implemented appropriately:

o    Do not create clustered indexes based on fields that will slow down order storage; in fact clustered indexes may not even be appropriate at all

o    Implement indexes based on the fields that will be utilized for query and analysis post-capture

With respect to Pipeline design, there are several principles that need to be considered:

·         Try to keep the Basket pipeline as light as possible; it typically runs many more times than the Order calculation and capture pipelines, so having less work to do will greatly improve site throughput.

·         The Order pipelines are transactional; however, there may be steps that cannot enlist in a DTC transaction (such as Web service calls for merchant-services authorization). In this case, one must think about how to handle failure scenarios and accommodate them manually through code.

·         Pipelines are still COM components; therefore one must use COM+ transactions appropriately and ensure that the threading model is free or neutral (apartment threading will not work) for everything to work properly – and to allow pipeline pooling, which greatly reduces instantiation time on the actual site.

·         Long-running operations may best be handled outside of the pipeline (e.g. – if calls to credit card authorization providers are egregiously expensive) – however this then requires separate handling outside of the order capture process and may also impact Payment Card Industry (PCI) compliance.

And that last comment provides a great segue into the topic of Order capture and PCI compliance. The considerations can best be summarized as:

·         One is best off by capturing an order and NOT storing any credit card data – just the authorization number. For returns, one would have to re-input the same or a different credit card (which conceptually works just as it does in most physical store environments).

·         If credit card data must be stored (such as when it is processed offline), it must be encrypted. Unfortunately, the Order system does not support encryption as an intrinsic capability. A custom pipeline component will have to be written to encrypt orders – which means a fair bit of initial development work, possible performance degradation during capture, and associated post-capture maintenance. This tradeoff should be weighed carefully.
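The preferred posture (the first bullet) can be sketched as simple tokenization at capture time. This is a hypothetical illustration, not Commerce Server code – the record shape and function name are invented:

```python
def capture_payment_record(card_number, auth_number):
    """Persist only the authorization number and a masked card number
    (last four digits) – never the full card number."""
    digits = card_number.replace(" ", "").replace("-", "")
    return {
        "auth_number": auth_number,
        "card_last4": digits[-4:],
        "card_masked": "*" * (len(digits) - 4) + digits[-4:],
    }

# Using a standard test card number:
record = capture_payment_record("4111 1111 1111 1111", "AUTH-12345")
print(record["card_masked"])  # ************1111
```

With only the authorization number and masked digits stored, there is no sensitive card data at rest to encrypt or audit, which is why this route usually beats building a custom encryption pipeline component.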

Marketing

The Marketing system is a relative black box compared to the rest of the Commerce Server systems, as using it is more a matter of configuration than design. That said, each aspect of the system has its own set of unique considerations to ensure successful use from an architecture perspective – especially with regard to dependencies on other systems and on runtime performance.

From an overall perspective, performance is the most notable consideration. The system was designed to work with between 1,000 and 2,000 items active at one time; beyond that, performance will degrade. Scenarios such as one discount per item in the Catalog will prove impractical in production; consider customized pricing and Virtual Catalogs instead, as an example.

The other major performance consideration is the usage of Target Expressions. The more Target Expressions in play (especially when compounded together), the worse the performance. Likewise, Target Expressions that can be evaluated against a single table in the Profile system will perform best; those that require join operations internally will perform considerably worse.

It is important to keep in mind that there is no caching in the Marketing system: expressions are evaluated upon each and every request, so they should be invoked judiciously. In general, the discounting functionality will not represent a problem, as it is typically only called during basket and checkout operations. (It is possible to use it on the end site via the Runtime Discount Filtering capability, but this will significantly degrade performance.) With respect to Advertisements, generic advertisements that use only impression tracking will be more practical (for cross-selling and up-selling) than using Target Expressions to target content to individual users, given the overhead of evaluating the expressions. Direct Mail, because it works offline, generally will not impact end-site runtime performance.

The intrinsic e-mail handling capabilities (or lack thereof) of the Direct Mail system are the last major consideration; they are aimed at providing basic send functionality with minimal tracking and error handling. This has proven to be a pain point for many customers. A better alternative might be to use the List Manager feature of Direct Mail (which utilizes the Target Expressions of the Marketing system to generate the recipient list) to create lists of intended recipients and then export them for use in another mailing engine.
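The export step can be sketched in a few lines. This is a hypothetical illustration of the hand-off format only – the recipient fields are invented, and in practice the list would come from List Manager rather than a hard-coded structure:

```python
import csv
import io

# Sketch: rather than relying on Direct Mail's minimal send capabilities,
# export the recipient list (as generated by List Manager / Target
# Expressions) and hand it to a dedicated mailing engine as CSV.
recipients = [
    {"email": "a@example.com", "first_name": "Ann"},
    {"email": "b@example.com", "first_name": "Bob"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["email", "first_name"])
writer.writeheader()
writer.writerows(recipients)

export = buffer.getvalue()
print(export)
```

The mailing engine then owns delivery, bounce handling, and tracking – the pieces Direct Mail handles only minimally.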

CS2007 Software Architecture Series Part 6: Application Design Considerations

With a solid understanding of exactly what Commerce Server is – and, more importantly, is not – one can come up with a reasonably logical methodology for building production-grade application architectures on top of the product. This section will attempt to outline the most expeditious thought process to get from here to there.

The most important consideration, however, comes before the technology – and that is business requirements. No two businesses run alike. In most cases, businesses will not adapt their business processes to work the way the product most naturally does. To get around this, one should generally not view Commerce Server as an out-of-box solution, but rather a toolset that can be utilized to build a solution around business requirements.

Hence, having an exceptionally solid understanding of the business requirements is the most important first step. As this book is about Commerce Server and not requirements analysis, that will not be discussed here.

The principal considerations that need to be thought through – beyond intrinsic idiosyncrasies within Commerce Server (which will be covered in a separate post) – include:

  • Tenancy
  • Single versus Multi-Environment
  • Interactions by Business Users
  • Interactions with Back-End Systems


Tenancy
The first consideration in a Commerce Server deployment is the expected business deployment topology – specifically, how many sites there will be and how data should be shared amongst them. In the simplest scenario, one will have a single site (e.g. – http://www.companyx.com) and a single set of Catalog, Inventory, Orders, Profiles, Marketing, and (optionally) Analytics resources.

Rarely is life that simple. Let’s consider a few common situations:
•    Multiple Brand Subsidiaries (e.g. – http://www.companyx-1.com, http://www.companyx-2.com and so forth – all belonging to the same parent)
•    International Sites (e.g. – http://www.companyx.ca, http://www.companyx.com, http://www.companyx.co.uk and so forth – all providing the same site for different locales)
•    Mix of both!

The business requirements around this have significant impact on how Commerce Server 2007 is deployed – and which edition of the product can be utilized. The principal deciding factor is whether Catalog, Inventory, Orders, and Marketing data should be shared between end-sites – or not.

If Catalog, Orders, and Marketing data can or must be shared amongst end-site instances, the application architecture must be implemented as follows: a single Commerce Server Site must be provisioned, with single instances of Site Resources such as Catalog, etc. Each end-site (e.g. – http://www.companyx-1.com, http://www.companyx-1.ca, http://www.companyx-2.com, http://www.companyx-2.ca, and so forth) must be configured as a Commerce Server Application within the Site. This configuration requires Commerce Server 2007 Enterprise Edition, as it exceeds the limit of one Application per Site in Standard Edition.

In another scenario, Catalog, Inventory, Orders, and Marketing data should not be shared between sites, but it may still be desirable to leverage the same physical infrastructure. In this case, the application architecture must be implemented as follows: each end-site must be configured as a separate instance of a Commerce Server Site, with one set of Site Resources and one Application per Site. Up to 10 such instances can be implemented on a single physical server running Commerce Server 2007 Standard Edition; if more than 10 are required, Enterprise Edition must be utilized.

Although other permutations exist, these represent the two principal Commerce Server 2007 tenancy scenarios that can be accommodated by the product. Commerce Server itself is not truly multi-tenant (at least in this release) – so it cannot differentiate between tenants within a single Site or Application instance.

In EITHER scenario, the Profiles and Analytics resources – being global resources – can be shared amongst everything or given affinity with one of the deployments. Likewise, in either scenario, single or multiple physical deployments can be utilized, but databases will be shared as noted based on the chosen application deployment scenario. Besides the obvious sharing (or not) of customer data, this also affects which business users can change and access which data.


Single versus Multi-Environment
The next consideration is how many environments are needed. If a single physical Web farm can be utilized, then Commerce Server 2007 Standard Edition suffices. However, this means that all business data changes – for pricing and the like – will be made LIVE against the production environment. This has several potentially negative ramifications, specifically:

•    Data Inconsistency – Any data entry mistake, or transient inconsistency while data is being updated, will be immediately visible to customers – and potentially reflect negatively on the business.
•    Performance – Making changes to the live site will indeed negatively impact runtime performance. It’s as simple as that.
•    Security – This will require that the Web Services are exposed to the Internet on the production Web farm – which could potentially represent a point of vulnerability for a hacker.

If any of these factors are deal breakers, then Enterprise Edition must be utilized and business data changes made in a physical environment separate from production.


Interactions by Business Users
Commerce Server 2007 was designed for all business user maintenance to go through the Windows-based business user tools – specifically Catalog Manager, Marketing Manager, and Customer and Order Manager – or at least a like approximation of those tools making similar calls to the Commerce Server Web services.

From an architecture perspective, the first consideration is whether or not these tools should be customized. If yes, ongoing maintenance must be considered, as these tools are revised at least as often as every Service Pack of Commerce Server. Microsoft will release DIFFs of source code between versions with Service Packs to facilitate making changes, but this must be planned and accounted for if customizations are indeed made.

From a utilization perspective, these tools tend to be somewhat processor intensive and/or destructive (given that they modify the underlying data upon which Commerce Server operates). They were never meant to be utilized against a live site. If they must be, the following considerations should be adhered to:

•    Perform maintenance in off-peak hours only – as some degree of performance degradation generally cannot be avoided
•    Utilize caching such that the changes will not be noticed – and then refresh the cache through some other means after the changes have been made

The best practice, of course, is to perform changes in a separate environment and then stage them to production utilizing Commerce Server Staging, the BizTalk Adapters, or some other convenient means.
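The caching pattern described above can be sketched as a version-keyed cache: the site serves data from an in-process cache stamped with a "publish version", and staging a new data set bumps the version so the next request transparently reloads. This is a hypothetical sketch (the class and variable names are invented, not a Commerce Server API):

```python
# Sketch of a version-keyed cache: stale data is served until the publish
# version is bumped, at which point the loader refetches from the store.
class VersionedCache:
    def __init__(self, loader):
        self._loader = loader     # fetches fresh data from the backing store
        self._version = None
        self._data = None

    def get(self, current_version):
        if current_version != self._version:
            self._data = self._loader()
            self._version = current_version
        return self._data

published_version = 1
backing_store = {"widget": 9.99}

cache = VersionedCache(lambda: dict(backing_store))
print(cache.get(published_version))   # {'widget': 9.99}

# Business users stage a price change, then bump the version to refresh:
backing_store["widget"] = 7.99
published_version = 2
print(cache.get(published_version))   # {'widget': 7.99}
```

The key property is that changes made mid-update are never visible: the site keeps serving the old version until the refresh is explicitly triggered.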


Interactions with Back-End Systems

When running an operational e-commerce system, there will generally be two streams of data – inbound to, and outbound from, the system. On the inbound side will be business user changes – with updates to Catalog, Inventory, Marketing, and Orders (for order status). On the outbound side will be Profiles (for customer data), Inventory (to reflect quantities purchased on the site), and Orders (to send off for fulfillment).

Commerce Server 2007 provides the BizTalk Adapters, which leverage the data management Web services, as its out-of-box mechanism for effecting line-of-business data updates of this nature. A typical data interchange will have a BizTalk hub attached to Commerce Server, with at least one (if not more) Send and Receive Adapters flowing data in and out of the Order, Profile, Catalog, and Inventory systems.

BizTalk in general represents a good integration strategy, as data interchange can be configured within the BizTalk toolset – with minimal coding – to any system that can already interface with BizTalk. At the time of writing, the list of adapters supported by Microsoft for BizTalk connectivity (and ergo Commerce Server connectivity via BizTalk) can be found at http://www.microsoft.com/biztalk/en/us/adapters.aspx.

Alternatively, any other SOA broker can connect to the Commerce Server Web services to perform the same functions as the BizTalk Adapters – there are no special hidden capabilities.

Some of the best practices to consider when utilizing the BizTalk adapters (or any other integration broker) are:
•    Operate on a reasonable polling schedule for pulling data – this is generally driven by business requirements, but there is a balance to be struck between not enough and too much. Once per hour typically meets the needs of most customers and is generally unobtrusive. During extreme peak times, polling may be best avoided and deferred to off-peak hours; business rules typically need to be adjusted in this case, such as additional inventory stock-out thresholds on hot items.
•    For areas where SQL indexes can be customized, tweak them to optimize for the queries generated by the polling operations. If one is unsure what the polling operations are doing, this can easily be discovered using SQL Profiler, and the indexes then adjusted accordingly. (Be careful, though – especially with clustered indexes – not to inadvertently create a bottleneck for updating data or for the runtime site.)
•    For updating data, keep the operations as infrequent as possible. Additionally, one must be respectful of the indexes that already exist – and the impact that updating can have (especially with clustered indexes). Separately, there may be additional operations required – such as refreshing the site cache – which may also need to be triggered at update time.
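The polling-schedule guidance above can be sketched as a simple policy function. This is a hypothetical illustration – the peak window, interval, and names are invented assumptions, standing in for whatever the business requirements dictate:

```python
from datetime import datetime, time as dtime

# Sketch of a polling-schedule policy for an integration broker: poll
# hourly, but skip pulls during a configured peak window and catch up
# afterwards during off-peak hours.
PEAK_START = dtime(9, 0)    # assumed peak-traffic window
PEAK_END = dtime(17, 0)

def should_poll(now, last_poll, interval_hours=1, avoid_peak=True):
    due = (now - last_poll).total_seconds() >= interval_hours * 3600
    in_peak = PEAK_START <= now.time() < PEAK_END
    return due and not (avoid_peak and in_peak)

last = datetime(2008, 12, 1, 2, 0)
print(should_poll(datetime(2008, 12, 1, 3, 0), last))   # True  (off-peak, due)
print(should_poll(datetime(2008, 12, 1, 10, 0), last))  # False (peak window)
```

The accompanying business-rule adjustments (such as higher stock-out thresholds on hot items while polls are deferred) would live alongside this policy, not in it.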