
Case Study: How bad code can hurt performance and even break functionality

Many websites, notably eCommerce and WordPress sites, live with bad code. They end up spending more effort improving performance by adding more resources. It is like adding more horsepower to an inefficient car engine – it will consume more fuel, and in the long run fixing the engine would be the better investment.

Not all such attempts will succeed, however. Bad code takes many forms. There are times when your application depends on an external service. The problem comes when this service is called inline, while your visitor is waiting for your server to respond. The two most common such dependencies are sending email (SMTP) and curl calls for external HTTP access (curl is a popular library for accessing remote servers programmatically).

In this article we will analyse a bad code example we found in production. A customer in India (we manage their Magento site hosting) raised a ticket that their international shipping costs had stopped working. Since we take care of their release process, we investigated whether an inadvertent code change had made it to production. We could find no change. In fact, no release had been made since the day the problem started. So, with permission from the customer, we investigated further.

The bad code

They had overridden the default table rate shipping with custom logic. The primary reason was that they wanted to show shipping costs in USD for US customers. Magento has the concepts of base currency and display currency. All calculations are done in the base currency (in this case INR), while the charge was shown in the display currency (USD).

 ...
    $to_Currency = urlencode($to_Currency);
    $url = "http://www.google.com/finance/converter?a=$amount&from=$from_Currency&to=$to_Currency";
    $ch = curl_init();
    $timeout = 0;
    curl_setopt ($ch, CURLOPT_URL, $url);
    curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1);
 ...

I wondered why it took over four years after the bad code was written for it to break. The reason it finally broke was that Google took the URL out of service. The code above then returned 0, as it could not find the expected tag in the output. So all shipping became free.

Analyzing the bad code

  • Instead of using Magento’s functions for currency conversion (a safer alternative is sketched after this list), the developer chose to
    • access the Google Finance page, and
    • scrape the HTML output
  • This may be in violation of Google’s terms of service (programmatic access)
  • The customer has a one-step checkout plugin – this code gets called at each step, whenever anything changes, slowing down the checkout page.
  • There is no retry or reporting of failure
  • The conversion rate used for shipping may differ from the rate used for the products, as the site’s conversion rates are updated daily.
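
Magento already ships with a currency conversion helper that uses the rates configured or imported in the admin. A minimal Magento 1 sketch of the safer alternative (the amount and the placement inside the carrier are illustrative, not the customer's code):

<?php
// Convert a base-currency (INR) amount to the display currency (USD) using
// Magento's own directory helper instead of scraping an external page.
// The rate used is the one stored in the store's currency rate table.
$amountInr = 1500.00; // hypothetical shipping cost in base currency

$amountUsd = Mage::helper('directory')->currencyConvert(
    $amountInr,
    'INR',   // from: the store's base currency in this case study
    'USD'    // to: the display currency shown to US customers
);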

Conclusion

Using curl in code that runs while the visitor waits is a bad idea. Apart from slowing down user interaction, the probability of an error increases, and that error then has to be handled appropriately.

A non-technical guide to scaling Magento (or any other website)

Performance and scaling of a Magento website are often confused. For a store owner who may not be technical, a close analogy with real life will help in talking to your hosting providers and other experts.

It is no coincidence that hits to a website are called traffic. We take this analogy further to explain which factors matter to the performance and scaling of a website.

Website performance is like a car – higher-performance cars drive faster and can cover a distance in a shorter time. Similarly, a higher-performing website will serve a page faster. This is often measured as page load speed. A critical component of page load that the server is responsible for is server response time. Like measuring the performance of a car, measuring page load speed is done in test mode with little or no traffic. Sometimes the performance is measured at a random time without looking at other traffic to the site. That is like test driving a car through traffic.

Scaling is like building highways and roads for the cars to move on. Highways are resources – CPU, memory, network – that the hits to a website will utilize. The task of a Magento scaling expert is to architect a system – servers and their sizes, the services to run on each server, the connectivity of the servers, access from the internet, and so on.

Hits to a website are like the traffic of random cars on a highway. Each vehicle seems to have a mind of its own, joining and leaving the highway. Each visitor to the website will take their own journey, visiting different pages.

Some Observations

Observation 1: Like a car cannot drive at its highest possible speed at all times due to traffic, a website too cannot perform at its best all the time. Understanding the factors that keep the website performing at its optimal level is the task of both the developer and the server architect.

Observation 2: Like in traffic we have vehicles of different performance, in a website not all URLs perform equally. A category page may not perform the same way as a product detail page, for example.

Observation 3: Better throughput will be achieved with the same resources if vehicle performance is improved – some bottlenecks can be avoided if the vehicles move faster. Similarly, a better-performing website is likely to scale better.

Observation 4: Like in traffic, in order to scale one has to find the bottleneck in the highway that is causing the current slowdown, fix it, and then look for the next bottleneck. This is a change in the hosting infrastructure and architecture, distinct from the website's performance.

Observation 5: A traffic designer's job is to ensure the maximum number of vehicles can pass through the highway at the best speed for each vehicle. A hosting designer's job is to ensure maximum traffic is handled in a way that serves each hit best.

What lessons can we learn from traffic management?

Lesson 1: To better manage traffic, the highway system has to be designed to be scalable. Mostly through bottleneck analysis we can determine what needs to be done – for example, is the database a bottleneck, is file system access a bottleneck, and so on.

Lesson 2: When traffic increases, possibly beyond the capacity of the highway, traffic management has to account for one more variable – starvation: the amount of time a vehicle has to wait at a metered light to enter the highway. The longer the wait, the more frustrated the drivers, who will find a better route to their destination.

Lesson 3: On a highway, lanes are drawn. Better hosting will make lanes. The way most hosting providers take traffic is analogous to not having lanes, in the hope that maximum throughput will be achieved by letting hits contend for resources. The operating system is left to decide which process gets to use resources.

What are the recommended steps to achieve scaling?

As a first step in server-side scaling, we move the database layer out to a separate instance or server. The main reason is that resources are better allocated within a single server when the workload on it is similar.

In our multi-part series we take you through achieving scale. The series is aimed at a store owner who need not be technical but is ultimately responsible for decisions about the store. Until now you had to depend on an expert. However, there are no clear answers, and the expert is making judgement calls based, most likely, on prior experience. As a matter of fact, no two webstores are alike, so the results of such efforts vary. This series will make you better informed.

We start by looking at a popular form of scaling – using FPC or Full Page Cache and other types of caches.

Another important aspect that helps with scale is code quality, specifically as it relates to scaling. Scaling is difficult to achieve reliably if any externally dependent blocking service is executed as part of the hit. Examples include sending email directly to a recipient or an external service, or sending information from the server to an external service. All such processing should be done through some form of queue handled by a different process, such as a cron job. Until Magento 1.9.2.4, for example, default email sending was inline, slowing down the display of the order success page.

Autoscaling adds and removes servers (and hence resources) – something traffic managers cannot do with highways. This gives website scaling an advantage: it can be more elastic.

Magento Caches – Scaling Magento Part 2

Introduction

Caches are an integral part of a strategy for the performance and scaling of a Magento website. Managing caches is a core function of infrastructure management. A cache is a mechanism that stores computed values, such as a query result or HTML, so a subsequent request does not need to recompute them.

While Part 1 looked at FPC specifically, in this article we review the other caches a Magento store needs. Some of these are commonly referred to in best-practice guides, but a deeper understanding will help put them in the perspective of Magento.


In general, for any cache the factors to consider are

  • Effectiveness – measured in terms of impact on page speed. Caches should be highly effective.
  • Invalidation flexibility – is an inbuilt automatic mechanism available? Is the mechanism too aggressive? A high score means the mechanism is very flexible.
  • Performance of a miss – if a miss were to occur, what would the performance be in terms of page speed? A low score represents bad performance.
  • Cost of refill – how much would it cost to refill the cache completely, back to its most effective state?
  • Hit ratio achievable – if the hit ratio were measured periodically in a real-world environment, what can be expected? A high score would be over 99% hits over a reasonable period such as a day.
  • Memory required – how much memory is needed on the server side to cache the content.
| Cache Type | Effectiveness | Invalidation Flexibility | Performance of miss | Cost to refill | Hit ratio achievable | Memory required |
|---|---|---|---|---|---|---|
| Browser | High | High | Low | Moderate | High | N/A |
| CDN | High | Low | Moderate | Moderate | High | N/A |
| FPC | High | Low | Low | High | Moderate | High |
| HTML Block Cache | Moderate | High | Moderate | Moderate | High | Low |
| MySQL cache | High | Low | Low | High | High | Moderate |

Browser cache

  • Easy to set up – at the hosting level. Static resources like css, js and images should have caching enabled.
  • There is no need for invalidation, as the browser checks for each cached resource whether a newer version is available.
  • The ability of browsers to detect changed resources means cacheable resources should have a very long expiry – say, over a year.
  • The browser’s requests to detect changes have a performance impact that depends on the number of resources.
  • A key question to be answered is how merging css and js files impacts browser caching performance. If merging is enabled, each page type (home, category, product, CMS) is likely to have different css and js files. Not merging allows the browser to cache individual files and reuse them across pages. However, in HTTP/1 browsers limit the number of connections to each domain. So the advice is: do not merge css and js files if using HTTP/2, or split domains for skin and js if using HTTP/1.

CDN

  • CDN networks cache at edge servers closer to users – reducing the round-trip latency from browser to server.
  • A CDN also offloads the server from serving static cacheable resources, improving the network performance of the server as well as freeing up server CPU to serve dynamic content.
  • CDNs may also take on the load of SSL termination; however, caution is needed here, as the traffic between the CDN and the server may be unsecured, making the site vulnerable to some types of attacks.
  • CDNs are notorious for invalidations – some charge for purge APIs, others take a few minutes before an invalidation is effective across all edge points. When evaluating a CDN, this is a key factor that is often not evaluated.
  • Having many edge locations may not be a good thing – each edge records the first access to a resource as a miss.
  • While a single miss is easily retrieved from the server or a backup store, a full invalidation requires multiple GBs to be transferred to make the cache effective again.
  • When full, CDNs give a great performance benefit on page load times.
  • Modern CDNs like section.io can also do FPC (HTML) caching using a distributed Varnish cache architecture.

FPC

We have reviewed aspects of FPC in part 1 here.


Recap

  • FPC caches full HTML pages – except for variable content
  • Excellent for caching dynamic content
  • Requires a very large amount of RAM on the server side
  • Depending on the quality of code, an FPC invalidation or miss can have a significant impact on resource utilization
  • Best implemented with autoscaling – so servers are added automatically when the cache is invalidated

HTML Block Cache

  • Also caches HTML, but at the block level. Magento uses HTML blocks to build a page.
  • Since blocks may be shared across pages, these blocks do not have a high invalidation cost – they will be regenerated once and used multiple times.
  • Can dramatically improve performance if used consistently and correctly.
  • Needs developer help, as many blocks are not cached by default. To cache a block, one needs a unique key that correctly identifies its variations (a minimal sketch follows this list). Check this technical blog for more info.
  • Invalidation can be either via a key or time-based (TTL). If using a key, the developer needs to write appropriate event callbacks to detect changes.
  • Examples of major speed-ups include home page blocks where the latest products are shown. Depending on how frequently the store is updated, a 10 minute to 1 hour TTL on the block will result in a dramatic improvement of home page speed.
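
As an illustration, here is a hedged Magento 1 sketch of what a developer might add to a custom "latest products" block so it is cached; the class name, lifetime and key composition are assumptions for this example:

<?php
// Minimal sketch: enable block-level HTML caching for a custom block.
// The cache key must include everything the HTML varies by (store, currency, ...).
class My_Module_Block_LatestProducts extends Mage_Core_Block_Template // hypothetical block
{
    protected function _construct()
    {
        parent::_construct();
        $this->addData(array(
            'cache_lifetime' => 3600, // a 1 hour TTL, as suggested above
            'cache_tags'     => array(Mage_Catalog_Model_Product::CACHE_TAG),
            'cache_key'      => 'latest_products_'
                . Mage::app()->getStore()->getId() . '_'
                . Mage::app()->getStore()->getCurrentCurrencyCode(),
        ));
    }
}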


MySQL cache

MySQL can store query results in memory. The amount of memory is specified in my.cnf as a combination of query_cache_limit (the maximum memory for a single query result) and query_cache_size (the maximum memory for all cached queries).

  • MySQL automatically invalidates a cached query if any table used in the query changes.
  • To access the cache, MySQL takes a lock on the table, thereby reducing the effectiveness of the cache.
  • Many MySQL articles recommend smaller cache values due to the lock problem, but it is best to test the size of the cache for your situation. It is best to monitor "Qcache_lowmem_prunes" and "Table_locks_waited" (a small sketch for reading these counters follows this list). The first gives the number of times a query was cached by removing another, and the latter gives the number of times a table lock was not immediately available. Both should be as low as possible.
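
As a hedged, non-Magento-specific illustration, the two counters can be read with a simple query; the connection credentials below are placeholders:

<?php
// Minimal sketch: read the two MySQL status counters mentioned above so they
// can be graphed or alerted on.
$pdo  = new PDO('mysql:host=127.0.0.1', 'monitor', 'secret'); // placeholder credentials
$stmt = $pdo->query(
    "SHOW GLOBAL STATUS WHERE Variable_name IN ('Qcache_lowmem_prunes', 'Table_locks_waited')"
);
foreach ($stmt->fetchAll(PDO::FETCH_KEY_PAIR) as $name => $value) {
    echo $name . ' = ' . $value . PHP_EOL; // both should stay as low as possible
}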

Our experience with the MySQL cache suggests it is useful when the front-end caches are empty. The cache has a negative effect on add-to-cart and checkout.

MySQL caches also have negative side effects when a large catalog is being loaded. We recommend keeping indexing on a schedule, disabling the MySQL cache before indexing starts and enabling it when indexing ends.

There are other caches too

  • Magento has caches other than the block cache. These should always be enabled on a production server.
  • PHP opcache
    PHP code is read and converted to "opcodes" which are then interpreted. An opcode cache stores the opcodes, saving that step on each request. The opcode cache should have sufficient RAM and keys (at least equal to the number of php files). Using opcache.php, the status should be checked regularly (a small status-check sketch follows this list).
  • Operating system cache
    Linux has an excellent file system cache – whenever additional RAM is available, Linux will keep opened files in RAM. This is important if, for example, you do not have a CDN: files will then most likely be retrieved from RAM rather than disk.
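
As an example, a minimal sketch of a status check similar to what opcache.php reports; whether the numbers call for raising opcache.memory_consumption or opcache.max_accelerated_files is a judgement for your setup:

<?php
// Minimal sketch: report opcache memory and key usage so you can tell whether
// the opcache RAM or the number of allowed keys needs to be raised.
$status = opcache_get_status(false); // false = do not list every cached script
$mem    = $status['memory_usage'];
$stats  = $status['opcache_statistics'];

printf("memory used: %.1f MB, free: %.1f MB\n",
    $mem['used_memory'] / 1048576, $mem['free_memory'] / 1048576);
printf("cached keys: %d of %d, hit rate: %.2f%%\n",
    $stats['num_cached_keys'], $stats['max_cached_keys'], $stats['opcache_hit_rate']);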

Scaling Magento Series

Performance and scaling of a Magento website are often confused. For a store owner who may not be technical, a close analogy with real life will help in talking to your hosting providers and other experts. It is no coincidence that hits to a website are called traffic!

Performance of a website is like that of a car – higher-performance cars drive faster and can cover a distance in a shorter time. Similarly, a higher-performing website will serve a page faster. This is often measured as page load speed. A critical component of page load that the server is responsible for is server response time. Like measuring the performance of a car, measuring page load speed is done in test mode with little or no traffic. Sometimes the performance is measured at a random time without looking at other traffic to the site. That is like test driving a car through traffic.

Core Infrastructure is like an engine – a better CPU with L2/L3 cache, faster memory and better disk performance will improve the engine you use. There are simple commands in Linux to find this information. Refer here for CPU performance, memory performance and disk performance.

Good code is like good fuel – Just as bad fuel will hurt the performance of a car, bad code will hurt the performance of a website. Refer here for identifying bad code in Magento.

Page Load Time is like the transmission – the mechanism that delivers speed to the page. This is achieved by optimizing above-the-fold content in code, use of a CDN, use of the browser cache, gzipping all text content, optimizing images, delivering appropriately sized images, minifying css and js, and optimizing the use of marketing pixels.

Scaling is like building highways and roads for the cars to move on. Highways are resources – CPU, memory, network – that the hits to a website will utilize. The task of a Magento scaling expert is to architect a system – caches, servers and their sizes, the services to run on each server, the connectivity of the servers, access from the internet, and so on.

Hits to a website are like the traffic of random cars on a highway. Each vehicle seems to have a mind of its own, joining and leaving the highway. Each visitor to the website will take their own journey, visiting different pages.

Observation 1: Like a car cannot drive at its highest possible speed at all times due to traffic, a website too cannot perform at its best all the time. Understanding the factors that keep the website performing at its optimal level is the task of both the developer and the server architect.

Observation 2: Like in traffic we have vehicles of different performance, in a website not all URLs perform equally. A category page may not perform the same way as a product detail page, for example.

Observation 3: Better throughput will be achieved with the same resources if vehicle performance is improved – some bottlenecks can be avoided if the vehicles move faster. Similarly, a better-performing website is likely to scale better.

Observation 4: Like in traffic, in order to scale one has to find the bottleneck in the highway that is causing the current slowdown, fix it, and then look for the next bottleneck. This is a change in the hosting infrastructure and architecture, distinct from the website's performance.

Observation 5: A traffic designer's job is to ensure the maximum number of vehicles can pass through the highway at the best speed for each vehicle. A hosting designer's job is to ensure maximum traffic is handled in a way that serves each hit best.

What lessons can we learn from traffic management?

Lesson 1: To better manage traffic, the highway system has to be designed to be scalable. Mostly through bottleneck analysis we can determine what needs to be done – for example, is the database a bottleneck, is file system access a bottleneck, and so on.

Lesson 2: When traffic increases, possibly beyond the capacity of the highway, traffic management has to account for one more variable – starvation: the amount of time a vehicle has to wait at a metered light to enter the highway. The longer the wait, the more frustrated the drivers, who will find a better route to their destination.

Lesson 3: On a highway, lanes are drawn. Better hosting will make lanes – a thought we think is unique to our style of hosting Magento. The way most hosting providers take traffic is analogous to not having lanes, in the hope that maximum throughput will be achieved by letting hits contend for resources. The operating system is left to decide which process gets to use resources.

What are the recommended steps to achieve scaling?

As a first step in server-side scaling, we move the database layer out to a separate instance or server. The main reason is that resources are better allocated within a single server when the workload on it is similar.

In general, these techniques can be used for scaling:

Caching. Simply stated: do not recompute results that can be reused. The results could be HTML (a complete page, a part of a page, or a page with holes to be filled), JSON (data returned to an API call), SQL query results, etc. Cache entries require either an expiry time or an invalidation event. Caches work best when stale content is not a major problem.

Queueing. Another powerful technique is putting work into a queue rather than doing it in real time. A queue is then either polled for results, updating when results are ready, or uses a trigger to update. Magento makes it easy to write trigger events, and many are used for third-party integrations. Unfortunately, events are fired inline – while the visitor to the website waits for the response. It is better to use a queueing system. Another popular use of event triggers is sending email.
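
As a hedged illustration of the queueing idea – the table name, columns and functions below are hypothetical, not a Magento API:

<?php
// Minimal sketch: the web request only records what needs to be done; a cron
// job later performs the slow external work (here, sending an email).

// 1) Inside the request (e.g. after order placement): enqueue and return fast.
function enqueueOrderEmail(PDO $db, $orderId, $recipient)
{
    $stmt = $db->prepare(
        "INSERT INTO email_queue (order_id, recipient, status) VALUES (?, ?, 'pending')"
    ); // email_queue is a hypothetical table
    $stmt->execute(array($orderId, $recipient));
}

// 2) In a cron job (e.g. every minute): drain the queue, leaving failures for retry.
function processEmailQueue(PDO $db)
{
    $rows = $db->query(
        "SELECT id, order_id, recipient FROM email_queue WHERE status = 'pending' LIMIT 50"
    )->fetchAll(PDO::FETCH_ASSOC);

    foreach ($rows as $row) {
        $sent = mail($row['recipient'], 'Your order ' . $row['order_id'], 'Thank you for your order.');
        $db->prepare("UPDATE email_queue SET status = ? WHERE id = ?")
           ->execute(array($sent ? 'sent' : 'pending', $row['id'])); // 'pending' rows are retried next run
    }
}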

Tuning. Monitoring and tuning to improve scale is a continuous process. If you are not monitoring, you cannot improve. Monitoring does not mean just CPU and RAM – it means measuring actual response times, cache hit/miss ratios and queue lengths, and alerting on and analysing these parameters.

Sharding. If the database is the bottleneck, sharding – either vertically or horizontally – can help reduce the load. This works equally well on MySQL as on NoSQL databases, but requires code to be reworked. A properly sharded database can allow parts of the application to be split, allowing for greater app-level scalability.

Laning services. Another powerful technique, achievable with stateless APIs, SOA, microservice architectures and other design patterns. This allows easy scaling, as busier lanes can be scaled out independently. Along with database sharding, lanes can be made very deep and can then be scaled out physically and geographically.

In our multi-part series we take you through achieving scale. The series is aimed at a store owner who need not be technical but is ultimately responsible for decisions about the store. Until now you had to depend on an expert. However, there are no clear answers, and the expert is making judgement calls based, most likely, on prior experience. As a matter of fact, no two webstores are alike, so the results of such efforts vary. This series will make you better informed.

We start by looking at a popular form of scaling – using FPC or Full Page Cache.

Another important aspect that helps with scale is code quality, specifically as it relates to scaling. Scaling is difficult to achieve reliably if any externally dependent blocking service is executed as part of the hit. Examples include sending email directly to a recipient or an external service, or sending information from the server to an external service. All such processing should be done through some form of queue handled by a different process, such as a cron job. Until Magento 1.9.2.4, for example, default email sending was inline, slowing down the display of the order success page.

Autoscaling adds and removes servers (and hence resources) – something traffic managers cannot do with highways. This gives website scaling an advantage: it can be more elastic.

Part 1 : Full Page Cache (FPC)

Part 2 : Other Magento Caches

Part 3 : Code quality

Part 4 : Optimizing Checkout Flow

Part 5 : Hardware

Part 6 : Hosting

Part 7 : Auto scaling

Full Page Cache (FPC): Scaling Magento Part 1

Introduction

The most important and often ignored factor in scaling is the quality of code. Well-written code will scale better. The next most important factor is perhaps caching. There are many types of caches that developers, managers and store owners need to understand. Full Page Cache (FPC) is seen by store owners as a magic solution to speed issues. Understanding the benefits and compromises of a caching mechanism is important for understanding scaling.

FPC Options

Magento Enterprise 1.x and Magento Open Source & Commerce 2.x both have an FPC module built in.

There are many plugins available for Magento Community 1.x. Some hosting providers will help set up a Varnish-based FPC with appropriate hole punching.

Magento 2 has two mechanisms for FPC – a PHP-based one (called FPC) and Varnish. Varnish is the preferred option for production due to its architecture and speed of response.

The discussion below applies to all these mechanisms as well as to Magento 1.x and Magento 2.

What is Full Page Cache (FPC)?

FPC is a cache of a full HTML page – except variable content such as login status, items in the cart, stock status of a product, or sometimes even the price of a product. When there is a hit – i.e. the required page is in the cache – FPC returns very fast compared to a miss – i.e. the page is not in the cache – which requires regenerating the content. FPC may store the cache in files, but more likely, for maximum benefit, it will be stored in memory.

FPC affects resource utilization – memory and CPU. As with all caches, we trade memory for CPU time.

Traditional FPC stores the page in the Magento cache and is a part of Magento. Varnish stores the page HTML after it has been generated and is not a part of Magento – it is a separate process.

What FPC is not!

Let us understand FPC better

  • Memory needed to store the entire site in FPC
    Let us say each page is 100KB and you have 10,000 pages to cache. That would take about 1GB of RAM. The problem is that when the number of pages or the page size rises, the RAM requirement goes up. So, if you now had 20,000 pages (the result of each option in layered navigation, for example), you would need 2GB; or if each page were 120KB, the 20,000 pages would need about 2.4GB. Pages are not just products – they are category pages as well. If layered navigation is added, the pages multiply fast, as each combination is unique and needs to be stored independently. If you start exceeding the RAM available, you need to decide what to do when you hit the memory limit.
  • Cache warming
    Cache warming is the process of automatically adding pages to the cache before a real visitor hits the cached page. When a cache is cleared, you may need to warm the cache to make FPC effective early. Cache warming uses a crawler to artificially visit pages of a site. A typical crawler will recursively crawl the site starting from the home page. This sounds logical, but here are some things to think through:

    • If possible, find the most likely pages you need in the cache and warm the cache with only those pages. This will give the maximum benefit.
    • If you cannot fit all the pages in memory, using a crawler to warm the caches becomes a problem – it will recycle pages out of memory at random, not based on the end-user popularity of the pages.
    • When the cache is being warmed, your resource requirement in terms of CPU will rise, as both the crawler and real traffic are being served.
    • If possible, crawl the site in parallel – the earlier the pages get cached, the more likely a visitor request will already be in the cache (scoring a hit).

Performance degradation on FPC full invalidation

The figure above shows the poor response immediately after an FPC that had grown to 1.5GB was cleared completely. The top image is a Redis usage graph from Munin, and the one below is AWS CloudWatch latency (time to serve a page) averaged per minute. The latency came down as AWS Auto Scaling added more instances, costing money.

  • Invalidating the cache:
    Magento automatically invalidates FPC (internal or Varnish) by tagging or hashing the content with keys that refer to the type of content. For example, it may generate a tag / hash CATEGORY_123 if the page depends on category 123. Now, when category 123 changes, Magento sends out an invalidate message that says "all pages that have tag / hash CATEGORY_123 should be invalid". Magento has an elaborate tag convention (a minimal tag-based invalidation sketch follows this list).
  • FPC and robotic crawlers (BOTs)
    Even if you do not use a crawler for warming, robotic crawlers on the internet (such as Google’s indexer Googlebot) will start filling the FPC with pages they happen to crawl. Our advice is that a site with FPC should have a robots.txt and a front end processor (nginx, WAF) restricting bots.
  • CPU and time needed to re-generate a page
    An FPC can be fully invalidated (cleared) by a (p)html or css file changing, or partially invalidated by a data change such as a product update. A miss from FPC results in the page being regenerated. The CPU requirement for a miss is much higher than for a hit. If a crawler is used to warm the cache, or if traffic is high, the CPU requirement can be quite high while the FPC fills up. Yet the visitor experience is not good during this period. Using autoscale, this performance degradation can be contained to some extent, as additional instances are launched to handle the high CPU requirement.
  • Discipline when using FPC – know when invalidation happens
    It is important to add discipline around code updates, as they have the worst effect on user experience.

    • Code updates should be done at low-traffic times.
    • Category changes should be carefully planned for low-traffic times.
    • Magento indexing should be set to manual (M1) or on schedule (M2), with a cron running the indexer.
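
As an example, here is a minimal Magento 1 sketch of cleaning cache entries by tag – the category ID 123 is hypothetical, and Varnish-based FPC setups layer their own purge mechanism on top of this:

<?php
// Minimal sketch: invalidate only the cache entries tagged with one category,
// rather than flushing the whole cache.
$tag = Mage_Catalog_Model_Category::CACHE_TAG . '_123'; // e.g. "catalog_category_123"
Mage::app()->cleanCache(array($tag));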

Our recommendation for FPC

  • Do not use a random crawler to warm the FPC cache. If necessary, use a page-popularity-based crawler to warm the cache.
  • Avoid using a crawler during high traffic – the crawler will compete with live traffic for system resources.
  • If possible, update code during low-traffic times, as it causes FPC to invalidate.
  • If your site is horizontally scaled, pre-launch instances behind your load balancer before invalidating FPC, either explicitly or indirectly, so the latency of starting an instance does not further worsen the user experience.

Magento 1.x FPC Plugins

  1. Free Lesti FPC : https://github.com/GordonLesti/Lesti_Fpc. Use this guide to install
  2. Magento connect search results for FPC

Should FPC be a part of scaling strategy?

FPC is concerned with speed. Scaling is concerned with the process that helps the site add resources when needed. FPC helps scalability by reducing the resources used per hit to the website, under certain conditions. It changes the dynamics of when and how many resources will be needed.

FPC has to be considered part of a scaling strategy – but as one of many parts.

Read part 2 where we discuss other Magento caches.

Read the overview of our Magento scaling series here.


Transactional Email Deliverability of your Magento Store

The internet started with email, and email continues to be a very important means of communication for a Magento site. Emails sent directly in relation to an activity on the website – such as a registration or a purchase – are called transactional emails. Transactional emails occupy a different place in the email marketing category and are governed by less strict rules worldwide.

Importance of Transactional Email for Magento stores

Deliverability of transactional emails is key to customer satisfaction and loyalty. If a customer requesting a password reset does not get the email in their inbox in time, you may well lose that customer.

Why is Transaction Email Deliverability a problem?

If email is fundamental to the internet, why is email deliverability an issue?
In order to protect email infrastructure from spammers, many services created spam lists – IP addresses that have previously been used to spam and are blacklisted. There is no single authority with such lists, which leads to the deliverability problem. The IP address you are assigned by your cloud provider may not be clean in all the lists; this is difficult to find out and much more difficult to get cleared. Transactional email providers come to the rescue – their business is to maximize deliverability.

What can be classified as a transactional email?

Newsletters, even opted-in ones, do not classify as transactional email. If you do not send newsletters through Magento, all emails that go out will be transactional.
However, you may be crossing the line if, for example, you send upsell / cross-sell content in your order confirmation email.

Third party providers

There are many providers and it is a very competitive market – a search on Google for transactional email will get you many results and comparisons.
Here are a few recently updated comparisons

How to get started with transactional email for Magento?

  • Check if you have an existing subscription to a transactional email service – perhaps indirectly. For example, if you are hosted on SoftLayer, you may get SendGrid credits; if you use Mailchimp to send newsletters, you may have Mandrill credits.
  • Sign up for the service – most of them have a free tier.
  • We think having a Magento plugin is not a requirement if you are self-hosted on a VPC or better. Read on – we think using the SMTP service is a better option than a plugin or code integration.

Before you install the Magento plugin, read this!

  1. Plugins add drag to the system – like it or not, each plugin you add contributes to a slowdown of Magento due to its architecture. Many plugin authors are guilty of packing additional features into the plugin.
  2. Plugins for transactional emails are “inline”, i.e. the email is sent while the purchaser is waiting for a confirmation. That is a dependency on an external system. Occasionally the service may slow down, and that delay is added to the customer’s wait.
  3. Local email systems are automatically configured to retry in case of upstream infrastructure failure. If configured at the system level, the email is handed only to the local system, from where it goes into a queue which the system’s email service will relay. If for some reason the remote email service is not responding, the queue remains active and a retry is attempted after some time.
  4. Do not select the service based on the availability of a Magento plugin – that is the least important part of the evaluation.

How to set up

All providers use TLS for SMTP communication on port 587. You will need to open port 587 in the firewall so emails can be sent out.

Note: Some cloud services, notably Google Cloud Platform, do not allow outbound communication on ports 25 or 587. For such services you need to use a transactional email service provider that allows SMTP communication over a non-standard port.

Use the guide below to get your username and password, and then use the steps to set up postfix.

For Mandrill

Username : mandrill username
Password : Get Key (Dashboard -> Get API Keys -> New API Key)
Domain : smtp.mandrillapp.com

For Amazon SES

Username & Password : https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html
Domain (as per region, this is for US West) : [email-smtp.us-west-2.amazonaws.com]:
Domain verification : http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-domain-procedure.html

For Sendgrid :

Get this certificate and store in /etc/postfix/ssl
wget https://certs.godaddy.com/repository/gd_bundle-g2-g1.crt
username : sendgrid account username
password : sendgrid account password

postfix setup

    1. Ensure a SASL authentication package such as cyrus-sasl is installed.
    2. Ensure you have an FQDN (Fully Qualified Domain Name). The command hostname -f should report a host.domain type of name. It is preferred that you use the domain you are sending from.
    3. Ensure postfix is installed (and sendmail is not).
    4. Edit /etc/postfix/sasl_passwd and enter the SMTP domain, username and password as per the transactional email platform (typically one line of the form [smtp.example.com]:587 username:password).
    5. chmod 600 /etc/postfix/sasl_passwd
    6. postmap /etc/postfix/sasl_passwd
    7. Edit /etc/postfix/main.cf and add the following to the bottom of the file:
# enable SASL authentication
smtp_sasl_auth_enable = yes
# tell Postfix where the credentials are stored
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
# use STARTTLS for encryption
relayhost =<refer platform info>
## For mandrill
smtp_use_tls = no
## For sendgrid
smtpd_tls_security_level = may
smtp_tls_CAfile = /etc/postfix/ssl/gd_bundle-g2-g1.crt
## For Amazon SES
smtp_use_tls = yes
smtp_tls_security_level = encrypt
smtp_tls_note_starttls_offer = yes
  8. Restart the postfix service.
  9. Test by sending an email and watching the result in /var/log/maillog (a minimal PHP test is sketched below).
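
A minimal PHP test through the local relay; the recipient and From address below are placeholders:

<?php
// Minimal sketch: send one message via the local postfix relay, then check
// /var/log/maillog (or /var/log/mail.log) for the relay result.
$ok = mail(
    'you@example.com',                 // placeholder recipient you control
    'Postfix relay test',
    'If this arrives, the SMTP relay is working.',
    'From: store@yourdomain.com'       // placeholder; use the domain you verified
);
echo $ok ? "queued locally\n" : "local submission failed\n";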

Minify css and js for Magento as a build process

How to improve page load speed without server overhead so you can serve more pages.

Need for a build?

Magento being written in PHP, an interpreted language, a build is not essential for deployment. Moreover, since many small store owners are not technical or do not have a full-time technical team, solutions that just work inline are preferred – for example, using plugins to minify css and js, transferring files to a CDN as and when needed inline, or even using Google’s excellent PageSpeed module.
Unfortunately, each of these inline steps, though it improves page load speed, results in an ever-so-slight slowdown of the server each time. On a high-traffic site, this results in inconsistent performance and user experience. We even zip the static content into .gz files so the web server (nginx in our case) does not have to spend a few milliseconds on it each time – assuming of course you do not have a CDN that can zip.

Grunt, the task builder

We use Grunt (http://gruntjs.com/) as our task runner. Grunt is a popular JavaScript task runner written in Node.js. We use grunt for many release-oriented activities – packaging a release, installing a release, minifying css and js, etc. In this article – the first of a planned series – we will go through the installation of grunt and offer a solution to minify js and css files as well as optimize images in the skin directory.

Installing grunt

  • Install nodejs and npm:
    curl -sL https://raw.githubusercontent.com/nodesource/distributions/master/rpm/setup_4.x | sudo bash -
    sudo yum install nodejs npm
  • Install grunt:
    sudo npm install -g grunt-cli
  • Download our Gruntfile.js and related code:
    mkdir /scripts
    cd /scripts
    git clone https://github.com/luroconnect/gruntformagento.git
    cp -r gruntformagento/src/* .

Run grunt to minify css and js (and more)

cd /scripts
grunt optimize

Typical output:

Running "copy:skin" (copy) task
Created 229 directories, copied 1769 files

Running "copy:js" (copy) task
Created 197 directories, copied 893 files

Running "uglify:skin" (uglify) task
>> 30 files created.

Running "uglify:js" (uglify) task
>> 301 files created.

Running "cssmin:skin" (cssmin) task
>> 149 files created. 2.34 MB → 1.77 MB

Running "imagemin:skin" (imagemin) task
Minified 1412 images (saved 400.26 kB)

Running "compress:skinjs" (compress) task
>> Compressed 32 files.

Running "compress:skincss" (compress) task
>> Compressed 155 files.

Running "compress:js" (compress) task
>> Compressed 332 files.

Done, without errors.

What is done by optimize:
  • Creates two directories, skin.min and js.min, initially with identical content to skin and js respectively
  • Runs the minifier for css and js on the skin.min and js.min directories. Files already named .min.js are not re-minified.
  • Runs an image optimizer on skin (png, jpeg)
  • Generates .gz gzipped files – for static delivery of gzip. See the note below on nginx configuration.

Update Magento Web URLs

Update the Magento unsecure and secure skin and js URLs to point to skin.min and js.min respectively, where the minified content is kept (a scripted example is sketched below).
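
These are the standard settings under System > Configuration > Web in the admin. As a hedged sketch, the same change can be scripted for the default scope as a release step; the URL values below are assumptions that mirror Magento 1's default placeholders:

<?php
// Minimal sketch (Magento 1): point the skin and js base URLs at the minified copies.
require 'app/Mage.php';   // run from the Magento root
Mage::app('admin');

$config = Mage::getConfig();
$config->saveConfig('web/unsecure/base_skin_url', '{{unsecure_base_url}}skin.min/');
$config->saveConfig('web/unsecure/base_js_url',   '{{unsecure_base_url}}js.min/');
$config->saveConfig('web/secure/base_skin_url',   '{{secure_base_url}}skin.min/');
$config->saveConfig('web/secure/base_js_url',     '{{secure_base_url}}js.min/');

Mage::app()->getCacheInstance()->cleanType('config'); // refresh the config cache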

Update nginx configuration

nginx configuration to load .gz static content if it exists:

#/* static content can have expiry set to long */
location ~* \.(jpg|jpeg|gif|png|css|js|ico|swf|woff|woff2|svg|TTF)$ {
    gzip_static on;
    #access_log off;
    log_not_found off;
    expires 360d;
}

gzip_static on tells nginx to serve the .gz version of a static file if it exists, rather than compressing it on the fly.

Run optimizer on images in media/wysiwyg

grunt media

Then manually copy the optimized images from media.min/wysiwyg to media/wysiwyg.

Conclusion

We firmly believe in creating a documented release process, and Grunt with our Gruntfile.js goes a long way towards making this a reality. In this article we have introduced the minification, image optimization and gzip compression of static files. Try it and let us know if you have any suggestions.
This script can be run directly on the live server, but make sure you do it at a low-traffic time.