Site Speed

Supercharge your Magento with a Varnish cluster

What is a varnish cluster?

A varnish cluster is a set of varnish nodes, each in a different geographical location, in front of the same Magento backend.

As shown in the diagram, a Magento backend hosted in the US East region can serve varnish nodes across the world. Visitors in each region are directed to the nearest varnish node, benefiting from lower latency and faster page loads.

If you serve customers in different regions – internationally or across the USA – your store can benefit from a varnish cluster.

Varnish and Magento performance

Magento 2 was architected to work with Varnish for improved performance. A typical webpage – a category listing or a product detail page – served from the varnish cache (called a HIT in varnish) typically has a TTFB of a few milliseconds. An uncached page (called a MISS in varnish) results in a TTFB of a few seconds. Optimizing a MISS is crucial, but we will not cover that in this article; varnish is one part of optimizing your website.
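A quick way to observe this from a shell – the hostname below is a placeholder, and the debug header appears only when Magento's cache debugging (developer mode) is enabled:

curl -s -o /dev/null -w 'TTFB: %{time_starttransfer}s\n' https://www.example.com/
curl -sI https://www.example.com/ | grep -i x-magento-cache-debug   # HIT or MISS with Magento's default VCL in debug mode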

What is Network Latency

Latency is the time it takes for a network packet to go from your computer to the server with a request and come back with a response – assuming the response was available on the server immediately. When a page scores a HIT in varnish, the server responds almost immediately, so any Time To First Byte (TTFB) recorded in the browser can be attributed to network latency.

If you market your website to many regions around the world but host varnish in a single location, your visitors may face higher latency. When access latency runs into a few hundred milliseconds, it becomes the bottleneck and needs optimization.

The table at https://wondernetwork.com/pings/ gives an idea of expected latencies. To read the table, use the row header as the source and the column header as the destination; for example, ping latency from Los Angeles to Mumbai is 267 msec.

What is the cause of latency?

Core latency is a function of distance – even light takes ~40 msec to travel 13,000 km, the distance from New York to Mumbai. A network packet travels through cables at a comparable speed, and network cables are much longer than a straight line. Moreover, the response packet has to travel all the way back.

Latency is also added as traffic passes through network equipment. The number of switches and routers a packet has to traverse depends on the network and service providers – yours as well as those of the server you are connecting to.

A varnish cluster architecture

Using a GeoIP-based DNS service such as AWS Route53, a user's request for a domain is resolved to one of several IPs. The Magento backend sits in the "default" region – where maximum traffic is expected – with a varnish node in each desired region (US East and Europe in the diagram below).

Geo-IP based DNS

A GeoIP-based DNS router such as AWS Route53 can direct traffic to the nearest varnish node by guessing the country of the IP requesting name resolution. A browser in Australia, say, would be served from the varnish node in Australia, while one in California would be directed to the varnish node in the US West.

Since IP-to-region or IP-to-country mapping is never fully accurate, all varnish nodes should be able to serve customers from any region. Specifically, a language or currency switcher should be available on the website.
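As a sketch of how this is wired up in Route53 (hostnames and IPs below are illustrative), each varnish node gets an A record with the same name but a different GeoLocation, plus a default record for unmatched locations:

cat > change-batch.json <<'EOF'
{ "Changes": [
  { "Action": "UPSERT", "ResourceRecordSet": {
      "Name": "www.example.com", "Type": "A", "SetIdentifier": "varnish-oceania",
      "GeoLocation": { "ContinentCode": "OC" }, "TTL": 60,
      "ResourceRecords": [ { "Value": "203.0.113.10" } ] } },
  { "Action": "UPSERT", "ResourceRecordSet": {
      "Name": "www.example.com", "Type": "A", "SetIdentifier": "varnish-default",
      "GeoLocation": { "CountryCode": "*" }, "TTL": 60,
      "ResourceRecords": [ { "Value": "198.51.100.10" } ] } }
] }
EOF
aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch file://change-batch.json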

Magento 2 supports multiple varnish nodes out of the box

An important feature of a cache in production is the ability to automatically and quickly clear content on demand from the user and the application. For example, when a product goes out of stock, the corresponding page should display the out-of-stock label. Magento uses a tag-based system to flush the appropriate content from the cache, and it allows configuring multiple varnish hosts so that a tag-based cache clear is sent to all of them:

bin/magento setup:config:set --http-cache-hosts=<varnish internal ip>:6081,<varnish internal ip 2>:6081

Refer : https://devdocs.magento.com/guides/v2.4/config-guide/varnish/use-multiple-varnish-cache.html
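Under the hood, the cache clear is an HTTP PURGE request carrying a tag pattern. With Magento's default VCL you can issue one manually to any node, for example to flush everything:

curl -i -X PURGE -H 'X-Magento-Tags-Pattern: .*' http://<varnish internal ip>:6081/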



Challenges in a varnish cluster

A varnish cluster is more complex to manage

Ensure the varnish VCL files and the front-end security configuration (WAF, rate limiting, etc.) are managed and kept in sync on all edge nodes.

Managing includes monitoring to ensure none of the servers goes down. With a cluster, your site can now be down in one specific region, and typical monitoring tools such as Pingdom would not catch that. A purpose-built monitoring solution is needed.

A varnish cluster adds server costs

Since these are frontend servers, the RAM and network speed they require depends on the traffic they receive.

Number of varnish nodes

Increasing the number of nodes in a varnish cluster does not always improve site speed, because each varnish node has its own hit ratio. A lower hit ratio means more users pay the latency and performance penalties combined – due to a varnish MISS. Traffic pattern and latency have to be taken into account when deciding how many nodes to use in a cluster.
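A rough worked example (illustrative numbers, not measurements) shows why a distant node with a high hit ratio can beat a nearby node with a low one:

# Expected TTFB ≈ hit_ratio * latency + (1 - hit_ratio) * (latency + render_time)
# Nearby node:  60% hit ratio,  20 ms latency, 2000 ms render → 0.6*20   + 0.4*2020  ≈ 820 ms
# Distant node: 95% hit ratio, 250 ms latency, 2000 ms render → 0.95*250 + 0.05*2250 ≈ 350 ms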

Difficult to warm the cache

Given the distributed nature of the cache, warming each cache independently takes more resources on the server side as well as some changes to the way a cache warmer works.

luroConnect : A modern cloud hosting platform

Schedule a call for a free evaluation and demo!

  • Is horizontal scaling manually or with autoscale right for you?
  • Evaluate if a varnish cluster will help your website performance
  • See our managed dev, staging and production environments
  • How we measure application performance every minute with alerts for a slowdown
  • Can your hosting costs be optimized?
  • How improved hosting can lead to better ROI!

On Magento Cloud? We have special offers if you switch your enterprise license to luroConnect managed AWS cloud.


Magento Cloud and Fastly

It is important to discuss Magento 2 Cloud's decision to use Fastly as a frontend.

 

The Magento 2 Cloud Pro architecture is given below. (Reference : https://devdocs.magento.com/cloud/architecture/pro-architecture.html)

The architecture uses Fastly as a Full Page Cache. Varnish is not installed on the Instances in AWS.

Fastly is a CDN that uses varnish. With the Magento 2 Cloud integration, a custom plugin is used on Magento along with a custom vcl file that runs on Fastly.

Fastly has many "POP" locations – per the Fastly network map (https://www.fastly.com/network-map) there are 20 POPs in North America, mostly in the USA. Each POP has its own varnish cache; when a page is not in a POP, content is fetched from the origin. A single page may therefore have to be rendered 20 times – once for each POP location in North America.

Drawbacks of this architecture from a varnish cluster perspective

  • More POPs do not necessarily lead to a better experience, as a higher percentage of MISSes on varnish results in a worse experience for more users.
  • As POPs increase the load on Magento infrastructure increases.
  • It is not possible to use a cache warmer to warm the Fastly cache.

What next?

An ideal situation would be a layered varnish configuration – each “satellite” varnish node serving a local user, caching a subset of the “main” varnish node, reducing the penalty of a varnish MISS.

Share your thoughts here or on social media.


Choosing Hosting platform for Magento : Digital Ocean

Once in a while you may want to check on the technology and options available for Magento hosting – technologies change, hosting costs reduce, and life becomes easier for you and the merchant. In this multi-part series we will explore cloud platforms and their suitability for hosting a production Magento website.

In this article we will explore how seriously you should consider Digital Ocean as a platform for hosting.

Digital Ocean (DO) offers VMs it calls "droplets". With an excellent blog and a friendly attitude towards developers, DO is a serious hosting contender, and many developers naturally recommend it to merchants for hosting Magento. How real is that? Let me explore a few pros and cons.

DO Choice of vCPU and memory

DO offers "shared CPU" and "dedicated CPU" options. For eCommerce hosting we always prefer dedicated CPU; shared CPUs work differently on every cloud provider, and quite often when you need CPU the most, you do not get enough. Each vCPU is an Intel hyper-thread. As of this writing (July 2020), we see Intel Xeon Gold 6140 @ 2.30 GHz with DDR4 memory at 2.6 GHz. Our simple memory speed test showed about 1.8 GB/s effective throughput to memory.

DO disk : Love / hate relationship!

I love the disk speed – you get good SSD performance with no throttling, even for attached storage. Unlike other cloud platforms, you don't have to juggle with figuring out how many IOPS you need once you understand how that platform's IOPS translate to real speed.

To test the disk I use a simple but effective method: create a 1GB file from /dev/zero using dd, which reports the write speed. Here is the command I use:

dd if=/dev/zero of=$tempFile bs=1M count=1024 conv=sync oflag=direct
# tempFile is the path to a temp file on the mounted disk being tested

For both direct disks as well as mounted block devices we see 200-350 MB/s disk speeds across all droplets our customers use. This is the best we have seen among cloud platforms. Physical hardware can give up to 450 MB/s.

So, why the hate?

We have seen disk crashes – thankfully on staging servers. So, for production servers, we always recommend customers have a DR plan to minimize loss of data when this happens. Our suspicion is that the storage is not on managed RAID, so a disk crash is a droplet-specific event.

Network

File transfer speed test

A large file transfer using scp on the internal network runs at about 130 MB/sec – roughly 1 Gb/sec.

NFS

NFS on block storage performs poorly. We use nfscache so most reads are served from cache; however, when a large number of reads or writes is needed, performance can drop dramatically.

Examples of NFS hurting performance

Magento 1 : As images are generated on the frontend, there is a check whether an image file exists before its URL is included in a page. Over NFS this invariably results in a slowdown.

Magento 1 and 2 : Large log files. Magento stores log files in a shared folder; multiple log lines per hit will slow the site down.

Experience : We were in the process of taking live a Magento 2 website that wrote about 2500 errors to a log file – an issue related to the M1 migration leaving some attributes undefined in M2. App servers saw 30% CPU in I/O wait and the site came to a crawl, while php access logs showed under 10% CPU utilization.

Non availability of autoscale

DO does not come with an autoscale option (you could get autoscaling in their Kubernetes offering, which we are not considering here). This means you have to keep capacity reserved for, say, a holiday sale.

Managing server costs

Server costs even when you shut them down

DO charges for servers even when they are shut down. If you do not want to be charged fully for a stopped server, storing an image of it and destroying the droplet is an option.

Recommendation

We host customers on Digital Ocean if they do not need scaling and are willing to have a DR plan (which generally increases the cost of the overall solution).
Stores that typically do not need scaling include:

  • PWA – Vue-storefront based stores which use nodejs
  • A Magento store with a high varnish FPC hit ratio (depends on many factors, but the requirement to scale to multiple app servers is then less likely)

 

How Magento can get near 0 downtime deployment

Factor III of the 12 Factor App says “Store config in the environment”.

The 12 Factor App – a set of 12 principles written by Adam Wiggins for predictable web app deployments – is what devops lives by.

Storing configuration in the environment, separate from code, has the advantages of reliable deployment and reduced time to deploy. It allows separating the build stage from the deploy stage, with some deploys being just a change of a softlink to the web root folder.

Historical preview : Magento 1

Magento 1 did not have much of a build process – js and css were not versioned, minification happened "online" on first access (as did database upgrades), and configuration was stored in the database.

The most reliable way to go from a dev configuration to a live configuration required either a set of steps known to work, or changes made directly to the database.

luroConnect developed its own build and deploy process. In our build step we

  • get source code from git
  • minify css and js files in the skin and js folders using a grunt based process
  • set appropriate file ownership and permissions

During the deploy phase, we

  • Copy app/etc/local.xml from a secure deployment configuration area (our environment)
  • modify the core config data to add a version string in the skin and js URLs
  • access the website once through the index.php to cause the update scripts to run

The deploy process is, of course, run with the site in maintenance – we prefer to do this at the nginx level. Mostly it is a small blip.

Historical preview – pre Magento 2.2

Early Magento 2 builds were similar – except there was some help from the bin/magento command. Our deploy process no longer needed to version static assets. Plugin enable/disable was given via config.php, and our deployment environment contained env.php.

However, developers had to manually configure and experiment with some options.

Site bringup required devops to access the admin panel or update the database with custom sql – enabling varnish, setting up CDN with a static URL, etc.

Magento 2.2 and beyond

Magento adopted the direction of the 12 factor app and presented at Magento Live UK 2017 a new set of features that help split application configuration from environment configuration. Application configuration is defined in app/etc/config.php, which is advised to be kept in git, while hosting environment and secure details are kept in env.php, which should not be kept in git.

It is a slightly weak conformance – as commented by 12factor app “This is a huge improvement over using constants which are checked into the code repo, but still has weaknesses: it’s easy to mistakenly check in a config file to the repo; there is a tendency for config files to be scattered about in different places and different formats, making it hard to see and manage all the config in one place. Further, these formats tend to be language- or framework-specific.”

Magento has fixed this in two ways:

  1. The language-specific aspect is addressed to some extent by allowing the bin/magento CLI to edit env.php for sensitive data. config:sensitive:set writes directly to env.php. These commands do not require the database and hence can be run in a pre-deploy step.
  2. Use of scoped environment variable names, which can be set in the nginx configuration or an include file such as fastcgi_params.

However, there is no documented way to set database details – except by manually editing the env.php file.
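A hypothetical example of both mechanisms (the config path and key below are made up for illustration):

# write a sensitive value into env.php without opening the file – no database needed
bin/magento config:sensitive:set payment/examplepay/api_key 'SECRET-VALUE'

# or inject it as a scoped environment variable from nginx, e.g. in fastcgi_params
# fastcgi_param CONFIG__DEFAULT__PAYMENT__EXAMPLEPAY__API_KEY "SECRET-VALUE";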

The app:config:dump command

This command is a great help in maintaining a known configuration of the application (which the 12factor app suggests be committed to git), ensuring the configuration is communicated from developers to operations.

The app:config:dump command writes to config.php and env.php. While config.php is suggested to be committed to git, env.php should not be committed to git.

If a value is in config.php, the Magento admin panel does not allow the parameter to be edited. This locking helps with giving stability to the application configuration. It ensures the application is developed and tested with a known configuration.

The figure alongside shows the suggested flow.

Suggested flow for using app:config:dump

Why are Magento deployments still keeping the site in maintenance?

However, we find that even 2 1/2 years after the announcement, acceptance and understanding of these features is weak, leaving websites in maintenance mode as code is deployed.

Developers fail to maintain the discipline to own the configuration, and devops fail to understand the application's build and deploy process.

There are some practical problems as well. An eCommerce manager would like control over the live website – say, deciding when backorders are allowed storewide. Since this is locked in config.php, the request has to go through developers or devops.

luroConnect near 0 downtime deploy

luroConnect’s Magento 2 build runs in a pipeline – such as a Bitbucket pipeline. A commit triggers the pipeline, which does the following:

  • composer install (with the composer cache to speed this process)
  • bin/magento setup:di:compile
  • bin/magento setup:static-content:deploy

The contents are then tarred and sent to the staging and production servers.

Upon deploy the contents are untarred, deployment-related files like env.php are copied, and media and var are softlinked. The web root softlink is then changed to point to the new release. The process is slightly more complicated when multiple autoscale instances are running, as running instances are replaced with ones carrying the new code.

Only if the bin/magento setup:upgrade command needs to run is it required to put the site in maintenance.
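A minimal sketch of the softlink swap (paths are illustrative; ln -sfn replaces the link in a single step, so requests keep hitting the old release until the switch):

RELEASE=/home/example/releases/$(date +%Y%m%d%H%M%S)
mkdir -p "$RELEASE"
tar -xzf /tmp/build.tar.gz -C "$RELEASE"               # unpack the pipeline output
cp /home/example/deploy/env.php "$RELEASE/app/etc/"    # environment config lives outside git
ln -s /home/example/shared/media "$RELEASE/pub/media"  # media and var are shared across releases
ln -s /home/example/shared/var "$RELEASE/var"
ln -sfn "$RELEASE" /home/example/www                   # the nginx web root points at this symlink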

Would you like to switch to a modern hosting platform?

Schedule a call for a free evaluation!

With features like ~0 downtime code deploys and autoscale to reduce your hosting costs, luroConnect offers you an unparalleled hosting environment for Magento.

Schedule a call and we will show you how we can

  • Improve your hosting, possibly with autoscale
  • Have a managed dev, staging and production environment
  • Server performance measured every minute with alerts for a slowdown
  • A multi point health check every day
  • Optimized hosting costs

Magento emails & SMTP

Emails are a very critical part of an eCommerce website. There are 2 types of emails in use – transactional and promotional.

Transactional emails are emails the eCommerce system sends for regular customer activity – such as order confirmation email. It may also include abandoned cart email, wish list reminders, password change, etc. If a website visitor action causes an email to go out, it is a transactional email.

In this article we will discuss the best way to configure transactional email sending in Magento. This article applies to both Magento 1 and Magento 2. The focus is on what happens on production systems.

The SMTP Plugin

Traditionally, Magento agencies and developers install an "SMTP plugin". A simple search in the Magento marketplace for SMTP reveals many options. Some are general plugins that allow you to connect to any SMTP provider; others are from specific vendors such as SendGrid.

Advantages of such plugins are

  • Easy setup
  • Developer testing is easy
  • Websites on shared hosting have to use such a plugin

However, for growing websites, such plugins pose particular problems:

  • Outgoing ports from the application server may not be open
  • Many SMTP providers have restrictions such as rate limit on email sending or number of connections. Some SMTP plugins may not handle this condition well – just reporting the error vs trying to send again.
  • They may not be able to alert when email sending is down
  • When upstream providers’ SMTP service is down or slow, cron will slow down as each email times out on sending. (Magento sends emails in cron).

Using postfix

By default, Magento hands emails to the local system where php is running. Linux systems have many options for receiving such requests and sending out the emails; a popular choice is postfix. While postfix can deliver emails directly to the recipients’ email systems, we are interested in its "relay host" mode. In this mode, postfix relays emails sent to it by Magento to the upstream provider – such as SendGrid.

Advantages of postfix

  • Postfix has an inbuilt queue. When the upstream server’s rate limit is reached, a “defer” status is returned. Postfix understands this and it will queue deferred emails and keep retrying. Emails are not lost.
  • Magento cron will deliver to local postfix in a matter of milliseconds, freeing up cron to perform other activities.
  • Connections can be pooled so if many emails are in queue they will be pooled together and sent using a single connection. Some SMTP servers, notably gmail, have a rate limit on the number of connections.
  • Log files can be checked by system admins or shipped to log collection tools. Advanced commands help check the queue, even trim it using conditions such as email subject – see the examples after this list.
  • Flexible configuration allows many features – on a staging server for example, only whitelisted recipients can receive emails, preventing accidental trigger of email to customers. Or emails to known spammers can be filtered by email id or domain.
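For instance, with standard postfix tooling:

postqueue -p                  # list the queue, including deferred messages and the deferral reason
postcat -q <queue-id>         # inspect a single queued message
postqueue -f                  # flush – attempt delivery of all queued mail now
postsuper -d ALL deferred     # drop everything in the deferred queue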

Installation and configuration

Installing postfix is easy – the package is part of any linux distribution. For RHEL-based systems like CentOS :

sudo yum -y install postfix
sudo yum -y install cyrus-sasl-plain cyrus-sasl-md5

Configuring postfix to act as a relayhost

Below is configuration information for each SMTP service. Commands should be run as root (or with sudo). Ownership of the files created, especially the password files, is assumed to be root.

SendGrid

  1. Sign in to SendGrid and go to Settings > API Keys.
  2. Create an API key.
  3. Select the permissions for the key. At a minimum, the key must have Mail send permissions to send email.
  4. Click Save to create the key. SendGrid generates a new key; this is the only copy, so copy it to the clipboard for the next step.
  5. Create the hashmap from the key (the port here must match the relayhost port set below)
    echo "[smtp.sendgrid.net]:587 apikey:{{{PASTE}}}" >> /etc/postfix/sasl_passwd
    postmap /etc/postfix/sasl_passwd
    chmod 600 /etc/postfix/sasl_passwd.db
    rm /etc/postfix/sasl_passwd
  6. Add to postfix configuration file main.cf
    cat >> /etc/postfix/main.cf <<EOF
    smtp_tls_security_level = may
    # enable SASL authentication
    smtp_sasl_auth_enable = yes
    # tell Postfix where the credentials are stored
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    header_size_limit = 4096000
    relayhost = [smtp.sendgrid.net]:587
    smtp_tls_CAfile =/etc/pki/tls/certs/ca-bundle.crt
    EOF
  7. service postfix restart
Mailgun

  1. Sign up for a Mailgun account.
  2. Get credentials for Mailgun. Mailgun shows the SMTP username and password under the Domains section of the Mailgun portal.
  3. Create the hashmap from the key
    echo "[smtp.mailgun.org]:2525 {{user}}:{{password}}" >> /etc/postfix/sasl_passwd
    postmap /etc/postfix/sasl_passwd
    rm /etc/postfix/sasl_passwd
    chmod 600 /etc/postfix/sasl_passwd.db
  4. Add to postfix configuration file main.cf
    cat >> /etc/postfix/main.cf <<EOF
    relayhost = [smtp.mailgun.org]:2525
    smtp_tls_security_level = encrypt
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_CAfile =/etc/pki/tls/certs/ca-bundle.crt
    EOF
  5. service postfix restart

Amazon SES

Amazon SES is the most cost-effective SMTP provider to our knowledge. However, it is also the most difficult to set up, as it requires you to verify sender emails and to move out of sandbox mode (which we will not cover here).

  1. Login to your aws account
  2. Select “Simple Email Service” from the services
  3. Select Domains from the left menu
  4. Click “verify a new domain”
  5. Enter domain and click on verify
    SES will generate the TXT records you need to add to your DNS

    SES will also generate the DKIM CNAME record to be added to your DNS
  6. Ensure the domain verification status turns green
  7. In the SES main screen, select “Email Addresses” from the left menu
  8. Enter the sender email address. Sender email addresses are set in Magento, and depending on your Magento configuration different emails may need to be verified. Each address needs a mailbox, as a confirmation email is sent to it.
  9. Refer https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html to create SMTP credentials. Note username and password.
  10. Depending on the region your SMTP endpoint (relayhost setting in the main.cf below) will be different. Refer https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-connect.html
    Note : The region of SES is unimportant in that there are no restrictions on where the email is sent from. Your Magento instance can be anywhere – not even on AWS.
  11. Save password
    echo "[email-smtp.us-west-2.amazonaws.com]:587 USERNAME:PASSWORD" >> /etc/postfix/sasl_passwd
    postmap /etc/postfix/sasl_passwd
    rm /etc/postfix/sasl_passwd
    chmod 600 /etc/postfix/sasl_passwd.db
  12. Update main.cf
    cat >> /etc/postfix/main.cf <<EOF
    relayhost = [email-smtp.us-west-2.amazonaws.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_security_options = noanonymous
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_use_tls = yes
    smtp_tls_security_level = encrypt
    smtp_tls_note_starttls_offer = yes
    smtp_tls_CAfile =/etc/pki/tls/certs/ca-bundle.crt
    EOF
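Whichever provider you configured, a quick way to verify the relay after restarting postfix (assuming the mailx package is installed):

echo "Relay test body" | mail -s "Postfix relay test" you@example.com
tail -f /var/log/maillog        # watch for status=sent, or status=deferred with a reason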

Advanced configurations

Postfix relay gateway

In a more mature environment, each php server may not be allowed to send on SMTP ports. Even in the presence of a NAT gateway, we recommend a main postfix gateway to ease logging and prevent loss of emails when an instance is lost in autoscaling. With postfix we can create a relay: php servers are configured to send email to a designated email gateway server, which in turn relays to the upstream SMTP server.
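On each php server the configuration then reduces to pointing at the internal gateway (the IP below is illustrative); credentials and upstream settings live only on the gateway:

cat >> /etc/postfix/main.cf <<EOF
relayhost = [10.0.0.25]:25
EOF
service postfix restart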

Sending promotional emails

Promotional emails should ideally not originate from your website – they should come from your marketing tools. However, Magento does support sending promotional emails. To improve deliverability of crucial transactional emails, we recommend using a different sender email domain for promotional emails. If this is configured in Magento, postfix can send promotional emails through an upstream service different from the transactional one. Assuming a username / password has been created in the SMTP service for the promotional domain, the following configuration selects the relayhost based on the sending domain.

cat >> /etc/postfix/main.cf <<EOF
smtp_sender_dependent_authentication = yes
sender_dependent_relayhost_maps = hash:/etc/postfix/relayhost_map
EOF

cat >> /etc/postfix/relayhost_map <<EOF
user@promodomain  [smtp.xxx.com]:{{port}}
EOF

Add the promotional domain's credentials as a sender-keyed line in /etc/postfix/sasl_passwd (recreate the file if it was removed earlier):

user@promodomain  {{user}}:{{password}}
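Since both files are declared as hash: maps, rebuild them and restart postfix for the changes to take effect:

postmap /etc/postfix/sasl_passwd
postmap /etc/postfix/relayhost_map
service postfix restart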

luroConnect and Magento emails

The configurations described above are abridged versions of what we use at luroConnect. We use the advanced configuration, with a designated server allowed to communicate on port 587 to the outside. We also use secure keys to ensure passwords are not saved in chef cookbooks or configurations; instead they are stored in a secure key store.

Interested in knowing about the advanced architecture of luroConnect?

Fill the form below and we will contact you.




Free as in freedom, not free beer!

Introduction

The open source community has had this saying for long, though there are many – including myself – who have not fully understood what the difference means.

With recent changes to open source license agreements, this difference has come to the fore. For example, changes in the open source licensing of redis and mongodb have restricted how AWS and other cloud providers can conduct their business. Directly relevant to eCommerce merchants is the effect on open source Magento since Adobe's acquisition of Magento, and the path that has followed.

Magento Open Source vs Enterprise

Adobe’s business reason to own Magento is (in my opinion) to complete their offering. Adobe has seen a huge, public and successful transition from their traditional business of one-time purchases of packaged software to a subscription and even a SaaS model. With the acquisition of Magento and the Commerce Cloud licensing model, Adobe clearly thinks Magento should be packaged with hosting – hence the word Cloud in the offering and an Enterprise license priced to include cloud hosting. A similar transition is seen in SAP's hybris commerce offering, which bundles hosting as SAP Commerce Cloud. Unlike SAP hybris, though, Magento is open source.

If you accept the Adobe Magento Commerce Cloud offering, you submit to the fixed set of features that are offered by their cloud or subscribe to a SaaS service for integration – though sometimes even that requires qualification or may not be possible.

For example, if you want PWA, you are limited and have to wait for the PWA Studio. If you want an improved search interface, you are limited to their choice. Similarly, for CDN, image optimization and Web Application Firewall, you are limited to Fastly, Adobe’s choice in the matter.

Or, perhaps, you use a plugin that is connected to a SaaS service.

When a Magento website is self-hosted, though, the choice is to install a plugin or enhance the code, possibly alongside a service you host yourself. In the examples above, you may want to use vue-storefront, one of many systems for search, or ImageMagick as an image optimization solution.

From Free Beer to Freedom!

The earlier licensing model of Magento pushed the decision to a "board level" – companies like ours always take the supported version. Mostly, the open source version of Magento was attractive to those drawn by the "free beer".

However, now the decision is that of freedom – since the paid version comes with restrictions.

If you feel guilty of being a taker of open source, you can sponsor community commits back to open source Magento. The community participation is not negligible – Matt Asay of Adobe suggests it may be as high as 50% in this article.

(Indeed there are community participants who think Adobe is gaining from open source contributions, but that is a different blog article).

Shameless plug!

Full stack managed hosting support from luroConnect gives you the benefit of supported open source and the flexibility to build your own solution around it. The entire stack is based on open source – from linux to Magento – including nginx, ModSecurity, redis, elasticsearch, sphinx and of course mysql. (We reluctantly also allow ioncube-encrypted plugins.) Coupled with a release process from your git. Hosted on any cloud or open hosting provider – in the age of Uber you don't need to own your data centers. Our multi-layered security approach and proactive monitoring come standard. With additional features like a disaster recovery plan, image optimization, peep-hole maintenance and a dashboard to monitor and control key tasks such as code deployment or indexing, we bring peace of mind to Magento hosting. Check out our pricing and connect with us.

Magento Open Source vs Commerce Cloud

Magento Commerce Cloud does offer additional features. See alongside (zoom for a larger view) for a comparison taken from magento.com. Some key features, like the WYSIWYG editor, will never be released in open source.

However, not everyone needs all of the features and some of these features are available from other plugin vendors or custom development from the many certified and non-certified agencies.

In fact, even if you are a Commerce Cloud customer, you will need customizations and potentially more plugins and even 3rd-party SaaS services to have a fully working store.

Conclusion

Magento Open Source is now a very viable option for all stores – brands and high volume stores included. With many options to customize and integrate, you have the freedom to make your own best of breed solution and not be restricted by the Adobe environment. With Managed hosting service you can get optimized and scalable websites.

PWA - Progressive Web Apps

Progressive Web Apps in Magento 2 – First Opinion

Magento released 2.3 and that included PWA Studio to help build progressive web apps.

Progressive Web Apps is a technology that allows a website to act like an app on a mobile device – in that it does not need continuous connectivity to the internet. This gives the benefit of easy and quick navigation.

When it comes to Magento software technology, there is no better person to get first-hand information from than Hashid Hameed, CEO of Codilar. Long before the official release, his team had been playing with PWA technology so they would be ready when the official announcement came. Here are excerpts from the interview :

luroConnect : What type of Magento stores is PWA most suitable for?

Hashid : PWA is suitable for all types of stores regardless of the industry and target audience. However, it will be a must-have in the near future for stores with a significant mobile audience. With the growing experiential demands of the modern shopper and the limited reach and high cost of native apps, PWA will become the saviour for such stores.

luroConnect : In order to make the most of this technology stores have to upgrade to Magento 2.3 – how much of an effort is that? (Hey, we just moved from 1.x not long ago!)

Hashid : Haha, you don’t have to worry. It’s just like another incremental build to the existing platform. Magento 2.3 is one of the most promising upgrades to the platform so far, adding a lot of new features. However, possibilities of conflicts with custom code and unsupported extensions cannot be ruled out.

luroConnect : OK, I am on Magento 2.3, how long before I can get a PWA app up and running?

Hashid : Magento has released a PWA Studio for developers to speed up the PWA development. Roughly, a PWA can go live in 2-3 months for a usual Magento store.

luroConnect : Will my PWA app share my website theme?

Hashid : Not really. PWA will have its own theme. Magento has a default theme called Venia for PWA.

luroConnect : Do I need to tell my website visitor to install an app?

Hashid : Certainly not. Not having to install an app is one of the biggest advantages of PWA. You get an app-like experience without installing an app – how cool is that? Plus, users can easily add the PWA to their home screen and use it like a regular app. Check out the demo below and see for yourself.

luroConnect : How is PWA different from AMP?

Hashid : AMP is great for blogs and news websites where the main content is usually static. AMP comes with a lot of feature restrictions, which is simply not suitable for the highly dynamic needs of an online store. For example, to be accepted as an AMP page and enable the Google AMP network to deliver the cached AMP pages, developers must use Google’s library of approved HTML, Javascript, CSS and analytics tags. You will not be able to use A/B testing, personalization, recommendations etc.

On the other hand, PWA is completely in the hands of the developer. There is no restriction at all.

luroConnect : Does my desktop experience enhance with PWA or is it just mobile?

Hashid : Though PWA talks more about the mobile experience, most benefits can be reaped on desktop as well – faster load times (no page loads), easier product discovery and intuitive navigation are some of these. However, the majority of merchants are most likely to roll out PWA only for the mobile audience initially.

luroConnect : If I have updated the category or price how quickly can my PWA store know?

Hashid : Ideally, there should not be any delay as the PWA directly talks to Magento’s new GraphQL API. If the API responses are cached for even faster response by the developers, there could be a delay. But this is totally in the hands of the developer. It depends on the store requirement.

luroConnect : How about stock updates? I want that information to be out in the forefront.

Hashid : The above answer applies here as well.

luroConnect : Can a checkout happen without internet connectivity?

Hashid : Yes, in an ideal PWA, the complete checkout can happen offline. The order will be synced in Magento when the internet connectivity is back. Just like, how some of the cloud POS systems work offline. However, having an offline checkout could lead to other complexities in business such as ensuring stock availability.

If you want to try out the new feature, Codilar has arranged a demo site: https://www.codilar.com/blog/magento-2-pwa-demo-venia/ If you want to get in touch with Hashid, email hello@codilar.com

We can analyze your site for free

Schedule a call

Not happy with your website performance and want an expert to look at it?

  • We will analyze your site using public information.
  • We will ask you to give us a 1 day web server log file.
  • We will try to identify what steps, if any, you should take to meet your site's performance goals.

The DIY Guide to Magento Hosting

Introduction

Quite often we depend on the hosting provider to give us a fast website. But the most affordable hosting options, such as a VPS, require some DIY skills to set up.

In this DIY guide we help you set up a high performance Magento hosting environment, with configurations and tips. As with any DIY, you need some knowledge and tools. We expect that you have a basic working knowledge of linux and its commands, access to root, the ability to install standard packages, and familiarity with a linux text editor such as vim. An idea of users, uids, groups, gids and processes will be useful for troubleshooting if you need it.

Much like a car with a good engine is only the starting point if you love fast cars, a good server is only the starting point for great page load times. A good transmission is the next thing to look for – in hosting, the equivalent of the transmission is the set of components and how they communicate.

Choosing a hosting provider

All hosting providers claim best hardware but sell you on price. How do you compare one from the other? Knowing how they work and some tests may help!

How clouds and virtualization work : Today's virtualization technology allows hosting providers to share a larger computing resource they own among many customers. The technology also allows overcommit – much like an airline that overbooks seats in the hope someone cancels, a hosting provider can overcommit resources such as CPU in the hope that not all customers will use all their CPUs at the same time. However, unlike an airline, virtualization can degrade performance without failing outright.

Testing the resources : We have a quick way to test the quality of your hardware. We measure the speed of RAM and disk in these simple tests. In each case we write data serially to a chunk of memory or disk and measure the throughput. Warning : free memory and disk space are required for the tests.

The memory test

# run as root – mount requires it
tempDir=$(mktemp -d -t linux-benchmark-XXX); mount -t tmpfs -o size=128m $tempDir $tempDir; dd if=/dev/zero of=$tempDir/test_ bs=1M count=128 conv=fdatasync; rm -f $tempDir/test_; umount $tempDir; rmdir $tempDir

The disk test

This test gives you a momentary result; run it multiple times to get a good average. Anything below 150 MB/s is slow. A typical SSD should give about 450 MB/s, good ones above 700 MB/s.

tempFile=<drive>/disktest; dd if=/dev/zero of=$tempFile bs=1M count=1024 conv=sync oflag=direct; rm -f $tempFile

The architecture

We will assume a simple architecture – all services on the same server. This is for simplicity and we have seen it to work well with sites on physical hardware with rather consistent traffic in a 24 hour period.

There can be alternate architectures

  • separate db server
  • multiple app servers – serving all traffic behind a load balancer
  • separate app server – say, for admin URLs, with a path-based load balancer

Each of these brings additional complexity in configuration. Please use the comments section below if you have a more complex architecture.

The nginx web server

Note : We use nginx – read our blog article on why we prefer nginx over apache.
The basic nginx configuration for Magento is below.

## define both backends
upstream socketbackend{
  server unix:/var/run/php-fcgi-www.sock;
}
upstream tcpbackend{
  server 127.0.0.1:9000;
}
server {
  listen 443 ssl http2;
  server_name example.com www.example.com;
  root /home/example/www;
  ssl on;
  ssl_certificate       /etc/pki/tls/certs/example_certs/www_example.com.crt;
  ssl_certificate_key   /etc/pki/tls/certs/example_certs/www_example_com.key;
  ssl_protocols         TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";  
  ssl_dhparam /etc/nginx/dhparams.pem;

  access_log /var/log/nginx/example.access.log;
  error_log /var/log/nginx/example.error.log;
  #/* deny should come first */
  location ^~ /app/ { deny all; }
  location ^~ /bin/ { deny all; }
  location ^~ /lib/ { deny all; }
  location ^~ /media/downloadable/ { deny all; }
  location ^~ /dev/ { deny all; }
  location ^~ /vendor/ { deny all; }
  location ^~ /update/ { deny all; }
  location ^~ /var/ { deny all; }
  location ^~ /downloader/ { deny all; }
  location ^~ /admin/ { deny all; }
  location ^~ /phpserver/ { deny all; }
  location ^~ /setup/ { deny all; }
  location ^~ /run/ { deny all; }
  #/* system . files */
  location ~ /\. {
    return 404;
  }
  #/* main php handler */
  location / {
    index index.php index.html;
    try_files $uri $uri/ @handler;
  }
  #/* set long expiry for static content. Update types as needed */
  location ~* \.(jpg|jpeg|gif|png|css|js|ico|swf|woff2|svg|TTF)$ {
    access_log off;
    log_not_found off;
    expires 360d;
  }
  #/* directories where you do not want php to be executed from */
  location ~* /(images|cache|media|skin|js|uploads|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
    return 403;
  }

  #/* when you need setup, remove /setup from the deny list above */
  location /setup {
    try_files $uri $uri/ @setuphandler;
  }
  # Rewrite Setup's Internal Requests
    location @setuphandler {
    rewrite /setup /setup/index.php;
  }
  # Rewrite Internal Requests
  location @handler {
    rewrite / /index.php;
  }
  location /pub/static {
    try_files $uri $uri/ @static;
  }
  location @static {
    rewrite ^/pub/static/(.*)$ /pub/static.php?resource=$1? last;
  }
  location ~ \.php$ { ## Execute PHP scripts
    try_files $uri =404;
    expires off;
    fastcgi_pass socketbackend;
    #fastcgi_pass tcpbackend;
    fastcgi_read_timeout 600s; #/* allow for slow uncached and admin pages */
    fastcgi_param HTTPS on; #/* this server block is TLS-only */
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }
}

Php-fpm – the php interpreter

An important factor is to not overcrowd the CPU and memory you have, as the default configuration does. The main configuration file is www.conf, specifically the process manager (pm) settings.
Below is a production php-fpm.d/www.conf file:

; Start a new pool named 'www'.
[www]

; Unix user/group of processes
; luroConnect : a 2-user system requires the php-fpm and nginx processes to run as the same user
; give the nginx group read-only access to the hosting directory, typically in /home//www
; if a directory needs to be written to by php (such as uploads, media, var), those directories
; need group write permission
user = nginx
group = nginx

; The address on which to accept FastCGI requests.
; Valid syntaxes are:
;   'ip.add.re.ss:port'    - to listen on a TCP socket to a specific IPv4 address on
;                            a specific port;
;   '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
;                            a specific port;
;   'port'                 - to listen on a TCP socket to all addresses
;                            (IPv6 and IPv4-mapped) on a specific port;
;   '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
; luroConnect : Match listen to the value in nginx. A single server system can use a socket.
; listen = 9000
listen = /var/run/php-fcgi-www.sock

; Set listen(2) backlog.
; Default Value: 511 (-1 on FreeBSD and OpenBSD)
;listen.backlog = 511

; Set permissions for unix socket, if one is used. In Linux, read/write
; permissions must be set in order to allow connections from a web server. Many
; BSD-derived systems allow connections regardless of permissions.
; Default Values: user and group are set as the running user
;                 mode is set to 0660
;listen.owner = nobody
;listen.group = nobody
;listen.mode = 0660
; When POSIX Access Control Lists are supported you can set them using
; these options, value is a comma separated list of user/group names.
; When set, listen.owner and listen.group are ignored
;listen.acl_users =
;listen.acl_groups =

; List of addresses (IPv4/IPv6) of FastCGI clients which are allowed to connect.
; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original
; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address
; must be separated by a comma. If this value is left blank, connections will be
; accepted from any ip address.
; Default Value: any
; 
;listen.allowed_clients = 127.0.0.1

; Specify the nice(2) priority to apply to the pool processes (only if set)
; The value can vary from -19 (highest priority) to 20 (lower priority)
; Note: - It will only work if the FPM master process is launched as root
;       - The pool processes will inherit the master process priority
;         unless it specified otherwise
; Default Value: no set
; process.priority = -19

; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static  - a fixed number (pm.max_children) of child processes;
;   dynamic - the number of child processes are set dynamically based on the
;             following directives. With this process management, there will be
;             always at least 1 children.
;             pm.max_children      - the maximum number of children that can
;                                    be alive at the same time.
;             pm.start_servers     - the number of children created on startup.
;             pm.min_spare_servers - the minimum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is less than this
;                                    number then some children will be created.
;             pm.max_spare_servers - the maximum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is greater than this
;                                    number then some children will be killed.
;  ondemand - no children are created at startup. Children will be forked when
;             new requests will connect. The following parameter are used:
;             pm.max_children           - the maximum number of children that
;                                         can be alive at the same time.
;             pm.process_idle_timeout   - The number of seconds after which
;                                         an idle process will be killed.
; Note: This value is mandatory.

; luroConnect : We set to static so we can dedicate CPU resources to php
; Create separate pools for processing URLs with different levels of CPU utilization
; For example if you have URLs that do a CURL,  make a curl pool. These will consume less CPU.
pm = static

; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
; luroConnect : we set it to slightly more than the cores available for php.
; 5 is ideal for 4 cores reserved for php.
pm.max_children = 5

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 5

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 5

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 35

; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.process_idle_timeout = 10s;

; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
; luroConnect : Magento and wordpress need a limit as there are inadvertent memory leaks
; set this number based on hits and the avg process size available through the status page
pm.max_requests = 500

; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. It shows the following informations:
;   pool                 - the name of the pool;
;   process manager      - static, dynamic or ondemand;
;   start time           - the date and time FPM has started;
;   start since          - number of seconds since FPM has started;
;   accepted conn        - the number of request accepted by the pool;
;   listen queue         - the number of request in the queue of pending
;                          connections (see backlog in listen(2));
;   max listen queue     - the maximum number of requests in the queue
;                          of pending connections since FPM has started;
;   listen queue len     - the size of the socket queue of pending connections;
;   idle processes       - the number of idle processes;
;   active processes     - the number of active processes;
;   total processes      - the number of idle + active processes;
;   max active processes - the maximum number of active processes since FPM
;                          has started;
;   max children reached - number of times, the process limit has been reached,
;                          when pm tries to start more children (works only for
;                          pm 'dynamic' and 'ondemand');
; Value are updated in real time.
; Example output:
;   pool:                 www
;   process manager:      static
;   start time:           01/Jul/2011:17:53:49 +0200
;   start since:          62636
;   accepted conn:        190460
;   listen queue:         0
;   max listen queue:     1
;   listen queue len:     42
;   idle processes:       4
;   active processes:     11
;   total processes:      15
;   max active processes: 12
;   max children reached: 0
;
; By default the status page output is formatted as text/plain. Passing either
; 'html', 'xml' or 'json' in the query string will return the corresponding
; output syntax. Example:
;   http://www.foo.bar/status
;   http://www.foo.bar/status?json
;   http://www.foo.bar/status?html
;   http://www.foo.bar/status?xml
;
; By default the status page only outputs short status. Passing 'full' in the
; query string will also return status for each pool process.
; Example:
;   http://www.foo.bar/status?full
;   http://www.foo.bar/status?json&full
;   http://www.foo.bar/status?html&full
;   http://www.foo.bar/status?xml&full
; The Full status returns for each process:
;   pid                  - the PID of the process;
;   state                - the state of the process (Idle, Running, ...);
;   start time           - the date and time the process has started;
;   start since          - the number of seconds since the process has started;
;   requests             - the number of requests the process has served;
;   request duration     - the duration in µs of the requests;
;   request method       - the request method (GET, POST, ...);
;   request URI          - the request URI with the query string;
;   content length       - the content length of the request (only with POST);
;   user                 - the user (PHP_AUTH_USER) (or '-' if not set);
;   script               - the main script called (or '-' if not set);
;   last request cpu     - the %cpu the last request consumed
;                          it's always 0 if the process is not in Idle state
;                          because CPU calculation is done when the request
;                          processing has terminated;
;   last request memory  - the max amount of memory the last request consumed
;                          it's always 0 if the process is not in Idle state
;                          because memory calculation is done when the request
;                          processing has terminated;
; If the process is in Idle state, then informations are related to the
; last request the process has served. Otherwise informations are related to
; the current request being served.
; Example output:
;   ************************
;   pid:                  31330
;   state:                Running
;   start time:           01/Jul/2011:17:53:49 +0200
;   start since:          63087
;   requests:             12808
;   request duration:     1250261
;   request method:       GET
;   request URI:          /test_mem.php?N=10000
;   content length:       0
;   user:                 -
;   script:               /home/fat/web/docs/php/test_mem.php
;   last request cpu:     0.00
;   last request memory:  0
;
; Note: There is a real-time FPM status monitoring sample web page available
;       It's available in: @EXPANDED_DATADIR@/fpm/status.html
;
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
pm.status_path = /status

; The ping URI to call the monitoring page of FPM. If this value is not set, no
; URI will be recognized as a ping page. This could be used to test from outside
; that FPM is alive and responding, or to
; - create a graph of FPM availability (rrd or such);
; - remove a server from a group if it is not responding (load balancing);
; - trigger alerts for the operating team (24/7).
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
ping.path = /ping

; This directive may be used to customize the response of a ping request. The
; response is formatted as text/plain with a 200 response code.
; Default Value: pong
;ping.response = pong

; The access log file
; Default: not set
; luroConnect : the access log is useful for measuring the CPU utilization percent of a request
; a lower percent indicates either high mysql or curl time
; if there are no curl calls, check the mysql cache and tune
access.log = log/$pool.access.log

;access.format = %R - %u %t %m %r%Q%q %s %f %{mili}d %{kilo}M %{total}C%%
access.format = %R - %u %t %m %{REQUEST_URI}e %r%Q%q %s %f %{mili}d %{kilo}M %{total}C%%

; The access log format.
; The following syntax is allowed
;  %%: the '%' character
;  %C: %CPU used by the request
;      it can accept the following format:
;      - %{user}C for user CPU only
;      - %{system}C for system CPU only
;      - %{total}C  for user + system CPU (default)
;  %d: time taken to serve the request
;      it can accept the following format:
;      - %{seconds}d (default)
;      - %{miliseconds}d
;      - %{mili}d
;      - %{microseconds}d
;      - %{micro}d
;  %e: an environment variable (same as $_ENV or $_SERVER)
;      it must be associated with curly braces to specify the name of the env
;      variable. Some examples:
;      - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e
;      - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e
;  %f: script filename
;  %l: content-length of the request (for POST request only)
;  %m: request method
;  %M: peak of memory allocated by PHP
;      it can accept the following format:
;      - %{bytes}M (default)
;      - %{kilobytes}M
;      - %{kilo}M
;      - %{megabytes}M
;      - %{mega}M
;  %n: pool name
;  %o: output header
;      it must be associated with curly braces to specify the name of the header:
;      - %{Content-Type}o
;      - %{X-Powered-By}o
;      - %{Transfer-Encoding}o
;      - ....
;  %p: PID of the child that serviced the request
;  %P: PID of the parent of the child that serviced the request
;  %q: the query string
;  %Q: the '?' character if query string exists
;  %r: the request URI (without the query string, see %q and %Q)
;  %R: remote IP address
;  %s: status (response code)
;  %t: server time the request was received
;      it can accept a strftime(3) format:
;      %d/%b/%Y:%H:%M:%S %z (default)
;      The strftime(3) format must be encapsulated in a %{}t tag
;      e.g. for an ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t
;  %T: time the log has been written (the request has finished)
;      it can accept a strftime(3) format:
;      %d/%b/%Y:%H:%M:%S %z (default)
;      The strftime(3) format must be encapsulated in a %{}t tag
;      e.g. for an ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t
;  %u: remote user
;
; Default: "%R - %u %t \"%m %r\" %s"
;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%"

; The log file for slow requests
; Default Value: not set
; Note: slowlog is mandatory if request_slowlog_timeout is set
slowlog = log/$pool-slow.log

; The timeout for serving a single request after which a PHP backtrace will be
; dumped to the 'slowlog' file. A value of '0s' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
; luroConnect : use for debugging slow access
; request_slowlog_timeout = 1

; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0

; Set open file descriptor rlimit.
; Default Value: system defined value
;rlimit_files = 1024

; Set max core size rlimit.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0

; Chroot to this directory at the start. This value must be defined as an
; absolute path. When this value is not set, chroot is not used.
; Note: chrooting is a great security feature and should be used whenever
;       possible. However, all PHP paths will be relative to the chroot
;       (error_log, sessions.save_path, ...).
; Default Value: not set
;chroot =

; Chdir to this directory at the start.
; Note: relative path can be used.
; Default Value: current directory or / when chroot
;chdir = /var/www

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Note: on highly loaded environments, this can cause some delay in the page
; process time (several ms).
; Default Value: no
;catch_workers_output = yes

; Clear environment in FPM workers
; Prevents arbitrary environment variables from reaching FPM worker processes
; by clearing the environment in workers before env vars specified in this
; pool configuration are added.
; Setting to "no" will make all environment variables available to PHP code
; via getenv(), $_ENV and $_SERVER.
; Default Value: yes
;clear_env = no

; Limits the extensions of the main script FPM will allow to parse. This can
; prevent configuration mistakes on the web server side. You should only limit
; FPM to .php extensions to prevent malicious users from using other extensions
; to execute php code.
; Note: set an empty value to allow all extensions.
; Default Value: .php
;security.limit_extensions = .php .php3 .php4 .php5 .php7

; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from
; the current environment.
; Default Value: clean env
;env[HOSTNAME] = $HOSTNAME
;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp

; Additional php.ini defines, specific to this pool of workers. These settings
; overwrite the values previously defined in the php.ini. The directives are the
; same as the PHP SAPI:
;   php_value/php_flag             - you can set classic ini defines which can
;                                    be overwritten from PHP call 'ini_set'.
;   php_admin_value/php_admin_flag - these directives won't be overwritten by
;                                     PHP call 'ini_set'
; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no.

; Defining 'extension' will load the corresponding shared extension from
; extension_dir. Defining 'disable_functions' or 'disable_classes' will not
; overwrite previously defined php.ini values, but will append the new value
; instead.

; Default Value: nothing is defined by default except the values in php.ini and
;                specified at startup with the -d argument
;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www@my.domain.com
;php_flag[display_errors] = off
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
;php_admin_value[memory_limit] = 128M

; Set session path to a directory owned by process user
php_value[session.save_handler] = files

; Note: /var/lib/php/session must be writable by the nginx user
php_value[session.save_path]    = /var/lib/php/session
php_value[soap.wsdl_cache_dir]  = /var/lib/php/wsdlcache

SSL Certificate

Getting an https certificate is now easy with Let's Encrypt.
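For example, with certbot's nginx plugin (assuming certbot is installed and your server block carries the domain in server_name), issuing and installing a certificate is a one-liner; the domain here is a placeholder:

certbot --nginx -d example.com -d www.example.com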

The nginx configuration enables http2, a newer version of the http protocol. Refer to our blog article on the advantages of http2 for Magento.
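For reference, enabling http2 in nginx comes down to the listen directive – a sketch, assuming the certificate paths Let's Encrypt generates by default:

listen 443 ssl http2;
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;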

Mysql

There are many flavors available based on the original open source. We use percona mysql. Configurations may vary depending on the version, but equivalent settings should be available.

Database performance depends on cache configuration based on available memory, a fast disk and your Magento indexing settings. Magento sites have a high ratio of reads to writes to the database – i.e. each product is browsed many times compared to the number of times you change it. As a result, mysql can perform really well as long as cache hit rates are high. The following parameters determine the cache configuration.

query_cache_type = 1

; set query cache size to total RAM you can assign to query cache. Magento can use a lot of query cache
query_cache_size = 4G

; max size of an individual query result that is cached. In Magento the category menu is often the largest query – ensure it is cached
query_cache_limit = 128M

; generally equal to the size of your database – ensures the database is stored in RAM
innodb_buffer_pool_size = 3G

The values depend on your website – such as number of categories displayed in the main menu or the length of the description of a product. The following queries help in determining the cache performance.

show status like 'Qcache_hits';
show status like 'Qcache_inserts';
show status like 'Qcache_not_cached';
show status like 'Qcache_lowmem_prunes';

Make special note of Qcache_lowmem_prunes, which counts queries that were dropped from the cache because memory was insufficient. A rising value typically means query_cache_limit or query_cache_size should be increased.
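A rough hit ratio can be computed from these counters; query cache hits do not increment Com_select, so one common approximation is:

show global status like 'Qcache_hits';
show global status like 'Com_select';
# hit ratio = Qcache_hits / (Qcache_hits + Com_select)
# e.g. 95000 hits against 5000 uncached selects gives 95000 / (95000 + 5000) = 95%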

Redis Configuration for cache and session

A sample configuration file follows.

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
pidfile /var/run/redis/redis.pid

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
# luroConnect - 6379 for Magento cache, 6380 for session
port 6379

# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for incoming connections.
#
bind 0.0.0.0

# Specify the path for the unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/redis.log

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility.  Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# luroConnect : set databases to 1 - we use a separate redis process for each type
databases 1

################################ SNAPSHOTTING  #################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving altogether by commenting out all the "save" lines.

# save 900 1
# save 300 10
# save 60 10000
# luroConnect - for Magento, disable save except for the session instance.
# Without this, a restart will cause recently added items in a guest cart to
# disappear, or a user to get logged out.
# For the session instance, save every minute (if at least 1 key changed):
# save 60 1
# For FPC and Magento cache - do not save. The cost of saving on a busy site
# can be high, and redis restarts only on service start, which is not often.

# Compress string objects using LZF when dump .rdb databases?
# By default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
# 
# Also the Append Only File will be created inside this directory.
# 
# Note that you must specify a directory here, not a file name.
dir /var/lib/redis/

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

# When a slave loses the connection with the master, or when replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out-of-date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO and SLAVEOF.
#
slave-serve-stale-data yes

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10

# The following option sets a timeout for both Bulk transfer I/O timeout and
# master data or ping response timeout. The default value is 60 seconds.
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
# 
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed to something
# hard to guess so that it will still be available for internal-use
# tools but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it to
# an empty string:
#
# rename-command CONFIG ""

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how will Redis select what to remove when maxmemory
# is reached? You can select among the following behaviors:
# 
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
# 
# Note: with all of these policies, Redis will return an error on write
#       operations when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory). By default Redis will check three keys
# and pick the one that was least recently used; you can change the sample
# size using the following configuration directive.
#
# maxmemory-samples 3
# luroConnect - set maxmemory depending on use and requirement.
# Typically, for the Magento cache and session instances do not set it, but monitor.
# For FPC set it to the max available memory, as FPC tends to grow.
# If maxmemory is set, use the allkeys-lru policy.
maxmemory 2G
maxmemory-policy allkeys-lru
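# luroConnect note: to monitor usage rather than cap it, check INFO
# periodically, for example:
#   redis-cli -p 6379 info memory | grep used_memory_human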

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want even a single record to get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.aof. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# IMPORTANT: see BGREWRITEAOF for how to rewrite the append
# log file in background when it gets too big.

appendonly no

# The name of the append only file (default: "appendonly.aof")
# appendfilename appendonly.aof

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "everysec" that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving the durability of Redis is
# the same as "appendfsync none", which in practical terms means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
# 
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file, implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
# 
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (or if no rewrite happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
# 
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 1024

################################ VIRTUAL MEMORY ###############################

### WARNING! Virtual Memory is deprecated in Redis 2.4
### The use of Virtual Memory is strongly discouraged.

# Virtual Memory allows Redis to work with datasets bigger than the actual
# amount of RAM needed to hold the whole dataset in memory.
# In order to do so, frequently used keys are kept in memory while the other
# keys are swapped into a swap file, similarly to what operating systems do
# with memory pages.
#
# To enable VM just set 'vm-enabled' to yes, and set the following three
# VM parameters according to your needs.

vm-enabled no
# vm-enabled yes

# This is the path of the Redis swap file. As you can guess, swap files
# can't be shared by different Redis instances, so make sure to use a swap
# file for every redis process you are running. Redis will complain if the
# swap file is already in use.
#
# The best kind of storage for the Redis swap file (that's accessed at random) 
# is a Solid State Disk (SSD).
#
# *** WARNING *** if you are using a shared hosting the default of putting
# the swap file under /tmp is not secure. Create a dir with access granted
# only to Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap

# vm-max-memory configures the VM to use at max the specified amount of
# RAM. Everything that does not fit will be swapped on disk *if* possible, that
# is, if there is still enough contiguous space in the swap file.
#
# With vm-max-memory 0 the system will swap everything it can. Not a good
# default, just specify the max amount of RAM you can in bytes, but it's
# better to leave some margin. For instance specify an amount of RAM
# that's more or less between 60 and 80% of your free RAM.
vm-max-memory 0

# The Redis swap file is split into pages. An object can be saved using multiple
# contiguous pages, but pages can't be shared between different objects.
# So if your page is too big, small objects swapped out on disk will waste
# a lot of space. If your page is too small, there is less space in the swap
# file (assuming you configured the same number of total swap file pages).
#
# If you use a lot of small objects, use a page size of 64 or 32 bytes.
# If you use a lot of big objects, use a bigger page size.
# If unsure, use the default :)
vm-page-size 32

# Number of total memory pages in the swap file.
# Given that the page table (a bitmap of free/used pages) is taken in memory,
# every 8 pages on disk will consume 1 byte of RAM.
#
# The total swap size is vm-page-size * vm-pages
#
# With the default of 32-byte memory pages and 134217728 pages Redis will
# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
#
# It's better to use the smallest acceptable value for your application,
# but the default is large in order to work in most conditions.
vm-pages 134217728

# Max number of VM I/O threads running at the same time.
# These threads are used to read/write data from/to the swap file. Since they
# also encode and decode objects from disk to memory and the reverse, a bigger
# number of threads can help with big objects even if they can't help with
# I/O itself, as the physical device may not be able to cope with many
# read/write operations at the same time.
#
# The special value of 0 turns off threaded I/O and enables the blocking
# Virtual Memory implementation.
vm-max-threads 4

############################### ADVANCED CONFIG ###############################

# Hashes are encoded in a special way (much more memory efficient) when they
# have at most a given number of elements, and the biggest element does not
# exceed a given threshold. You can configure these limits with the following
# configuration directives.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64

# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehash the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation redis uses (see dict.c)
# performs lazy rehashing: the more operations you run against a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with a 2 millisecond delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
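With one redis process per concern (6379 for cache, 6380 for session, as assumed above), Magento 2 points each store function at its own instance in app/etc/env.php. An abbreviated sketch (only the relevant keys shown):

'session' => [
    'save' => 'redis',
    'redis' => ['host' => '127.0.0.1', 'port' => '6380', 'database' => '0']
],
'cache' => [
    'frontend' => [
        'default' => [
            'backend' => 'Cm_Cache_Backend_Redis',
            'backend_options' => ['server' => '127.0.0.1', 'port' => '6379', 'database' => '0']
        ]
    ]
],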

Varnish vs Redis FPC

Varnish sits at the edge and caches pages as they are served. A Redis-based FPC sits inside the Magento application and stores pages as they are served.
Varnish gets the variable components of a page either via VCL or Ajax; the FPC, being inside the application, can use the dynamic block directly inside the page.

Magento 2 supports both varnish and FPC out-of-the-box. Varnish outperforms the redis FPC but takes a bit more chutzpah to get working with https. Magento has a good explanation of the setup at https://devdocs.magento.com/guides/v2.2/config-guide/varnish/config-varnish.html
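For reference, on Magento 2.2+ the page cache can be switched to varnish and the purge hosts registered from the CLI – a sketch, with example host names:

bin/magento config:set system/full_page_cache/caching_application 2
bin/magento setup:config:set --http-cache-hosts=varnish-us:6081,varnish-eu:6081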

Setting up the hosting user

Magento suggests using a 2-user system for hosting. What this means is that the php files are not writable by the web server or the php process. Create a user that we will call the hosting user, and add nginx to the hosting user's group:

useradd <hostinguser>
usermod -a -G <hostinguser> nginx

Set up the www directory with Magento.

Ensure the media and var folders have group write permission but other folders do not:

cd /home/<hostinguser>/www
find . -type d -exec chmod 750 {} \;
find . -type f -exec chmod 640 {} \;
find ./media -type d -exec chmod 770 {} \;
find ./media -type f -exec chmod 660 {} \;
find ./var -type d -exec chmod 770 {} \;
find ./var -type f -exec chmod 660 {} \;
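For the group permissions above to be effective, the tree should be group-owned by the hosting user's group (assuming nginx was added to that group as described above):

chown -R <hostinguser>:<hostinguser> /home/<hostinguser>/www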

Conclusion

We have presented detailed configuration files in this DIY guide to Magento hosting. If you need to set up multiple servers, the basic idea is similar, though some configurations will have to change.

Feel free to post your comments and we will improve this guide.

We can analyze your site for free

Schedule a call

Not happy with your website performance and want an expert to look at it?

  • We will analyze your site using public information.
  • We will ask you to give us one day’s web server log file.
  • We will try to identify what steps, if any, you should take to meet your site’s performance goals.

Watch our webinar on performance and scaling in Magento

It's free!

Using an analogy to vehicular traffic, we explain performance and scaling in Magento.
Key takeaways

  • Know how to compare hosting options
  • Importance of good code
  • How to scale
  • Tuning Magento

6 tips to speedup Magento websites

Introduction

Website speed is crucial to conversions. Many studies indicate that a slowdown of even one second can cause a drop in conversions – and websites that have taken this lesson to heart have seen conversions grow. Take Pier 1, for example, in this article.

Our tips are oriented towards what you need to do on the server side to improve performance, especially during high-traffic times when server response times tend to be inconsistent. These tips will speed up Magento websites.

Tip # 1 : Restrict Bots

When times are busy, you need to be mindful of whom you allow into your store. Restricting bots is easy to implement by checking the "User-Agent" field. By lowering server load, this speeds up Magento and helps reduce resources.
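A minimal nginx sketch – the bot names are illustrative, so tune the list to your logs, and take care not to block crawlers you want, such as googlebot:

if ($http_user_agent ~* "(ahrefsbot|semrushbot|mj12bot)") {
    return 403;
}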

Tip # 2 : Rate limit hits from the same IP

This is one more load-reduction tip that may work in your situation. Rate limiting hits from the same IP – unless your target audience is a group behind a single firewall IP – will restrict simple robotic attempts to crowd your site, especially if you are running a time-bound flash sale.
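nginx's limit_req module can cap the per-IP request rate – a sketch, where the zone size, rate and burst values are assumptions to tune:

# in the http {} block
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;
# in the location that serves PHP
limit_req zone=perip burst=20 nodelay;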

Bonus Tip : Use CDN to serve static content

During peak traffic, CDNs help offload not just your servers but also their network bottlenecks. This speeds up Magento by reducing the load on the entire system.

Tip # 3 : Use php7 instead of php5.x

php7 performance is almost 50% higher than php5; we have seen this when processing CPU-intensive Magento hits. But Magento 1.x does not officially support php7 and needs 3rd-party patches – we like the one from Inchoo. Magento 2 should be on php7.

Tip # 4 : Consider adding separate servers or pools for your checkout flow (or admin access)

While site performance is important, checkout flow performance is even more important, as it is at the business end of your sales funnel. Consider adding a separate checkout server or pool – akin to the High Occupancy Vehicle (HOV) or diamond lanes in traffic. This helps speed up Magento's checkout process.

The same concept can be applied to restrict the number of resources you reserve for admin access.
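One way to carve out this diamond lane in nginx is to route checkout requests to a dedicated php-fpm pool – a sketch, assuming a second pool listening on 127.0.0.1:9001 (the port and path pattern are assumptions):

# in the http {} block: pick an FPM pool based on the request path
map $request_uri $fpm_backend {
    default       127.0.0.1:9000;
    ~^/checkout   127.0.0.1:9001;
}
# then, in the PHP location block
fastcgi_pass $fpm_backend;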

Tip # 5 : Manage Magento Indexing

Leaving Magento indexes on "Update on Save" makes newly uploaded products available for shopping faster, but can slow down the site! If the business permits, move indexing to lower-volume times of the day (or night). This setting directly speeds up the Magento front end.
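In Magento 2, for example, this means switching indexers to "Update by Schedule" and letting cron run them off-peak:

bin/magento indexer:set-mode schedule
# or reindex explicitly at a low-traffic hour, e.g. from cron
bin/magento indexer:reindex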

Tip # 6 : Monitor the performance of your caches

A Magento site has many caches, and each one helps reduce server load and improve response times. But you need to monitor hit-to-miss ratios and out-of-memory conditions – more cache hits mean faster responses. The caches to look at are the php opcache, the Magento block cache, FPC (Full Page Cache) and the mysql query cache.
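For the redis-backed caches, for instance, hit and miss counters are available via INFO – a quick check, using the port assumed earlier:

redis-cli -p 6379 info stats | grep -E 'keyspace_(hits|misses)'
# hit ratio = keyspace_hits / (keyspace_hits + keyspace_misses)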

Conclusion

By reducing load, redirecting traffic and monitoring the server, the overall performance of a website can be improved.


Case Study : How bad code can hurt performance and even break

Many websites, notably eCommerce and WordPress sites, live with bad code. They end up spending more effort improving performance by adding more resources. It is like adding more horsepower to an inefficient car engine – it will consume more gas, and in the long run fixing the engine would be the better thing.

Not all such attempts will succeed, however. Bad code takes many forms. There are times when your application depends on an external service; the problem comes when this service is executed inline, while your visitor is waiting for your server to respond. The two most common dependencies are sending email (smtp) and curl calls for external http access (curl is a popular library for accessing remote servers programmatically).

In this article we will analyse a bad-code example we found in production. A customer in India (we manage the hosting of their Magento site) raised a ticket that their international shipping costs had stopped working. Since we take care of their release process, we investigated whether an inadvertent code change had made it to production. We could find no change; in fact, no release had been made since the day the problem started. So, with the customer's permission, we investigated further.

The bad code

They had overridden the default table-rate shipping with custom logic. The primary reason was that they wanted to quote shipping costs in USD for US customers. Magento has the concepts of base currency and display currency: all calculations are done in the base currency (in this case INR), while the charge was shown in the display currency (USD).

 ...
    $to_Currency = urlencode($to_Currency);
    $url = "http://www.google.com/finance/converter?a=$amount&from=$from_Currency&to=$to_Currency";
    $ch = curl_init();
    $timeout = 0;
    curl_setopt ($ch, CURLOPT_URL, $url);
    curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1);
 ...

I wondered why it took over four years since this code was written for it to break. The reason it finally broke was that Google took the URL out of service. The parsing code then returned 0, as it did not find the expected tag in the output – so all shipping was free.

Analyzing the bad code

  • Instead of using Magento’s functions for currency conversion, the developer chose (a sketch of the built-in alternative follows this list)
    • accessing the google finance page, and
    • scraping the HTML output
  • This might be in violation of Google’s terms of service (programmatic access)
  • The customer has a one step checkout plugin – this code gets called at each step, whenever anything changes, slowing down the checkout page
  • There is no retry or reporting of failure
  • The conversion rate used for shipping may differ from that used for the products, as the site conversion rates are updated daily
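For comparison, Magento 1 ships with a currency converter that uses rates already stored in the database (updated by the currency import cron) – a minimal sketch of what the override could have used instead:

// convert from base currency (INR) to display currency (USD) using
// Magento's directory helper - no remote call while the visitor waits
$converted = Mage::helper('directory')->currencyConvert($amount, 'INR', 'USD');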

Conclusion

Using curl in code that runs while the visitor waits is a bad idea. Apart from slowing down user interaction, the probability of an error increases, and the error then has to be handled appropriately.

A non technical guide to scaling Magento (or any other website)

Performance and scaling of a Magento website are often confused. For a store owner who may not be technical, a close analogy with real life will help in talking to hosting providers and other experts.

It is no coincidence that hits to a website are called traffic. We take this analogy further to explain what factors matter to the performance and scaling of a website.

Website performance is like a car – a higher-performance car drives faster and covers a distance in a shorter time. Similarly, a higher-performing website will serve a page faster. This is often measured as page load speed; a critical component of page load that the server is responsible for is server response time. Like measuring the performance of a car, measuring page load speed is done in test mode with little or no traffic. Sometimes performance is measured at a random time without looking at other traffic to the site – that is like test-driving a car through traffic.

Scaling is like building highways and roads for the cars to move on. Highways are the resources – CPU, memory, network – that hits to a website utilize. The task of a Magento scaling expert is to architect the system: servers and their sizes, the services to run on each server, connectivity between the servers, access from the internet, and so on.

Hits to a website are like random cars on the highway. Each vehicle seems to have a mind of its own, joining and leaving the highway. Each visitor to the website takes their own journey, visiting different pages.

Some Observations

Observation 1: Just as a car cannot drive at its highest possible speed at all times due to traffic, a website cannot perform at its best all the time. Understanding the factors that let the website perform at its optimal level is the task of both the developer and the server architect.

Observation 2: Just as traffic has vehicles of different performance, on a website not all URLs perform equally. A category page may not perform the same way as a product detail page, for example.

Observation 3: Better throughput is achieved with the same resources if vehicle performance improves – some bottlenecks can be avoided if the vehicles move faster. Similarly, a better-performing website is likely to scale better.

Observation 4: As in traffic, to scale one has to find the bottleneck in the highway causing the current slowdown, fix it, and then look for the next bottleneck. This is a change in the hosting infrastructure and architecture, distinct from website performance.

Observation 5: A traffic designer's job is to ensure the maximum number of vehicles can pass along the highway at the best speed for each vehicle. A hosting designer's job is to ensure maximum traffic is handled in a way that serves each hit best.

What lessons can we learn from traffic management?

Lesson 1: To manage traffic better, the highway system has to be designed to be scalable. Mostly, bottleneck analysis tells us what needs to be done – for example: is the database a bottleneck? Is file system access a bottleneck?

Lesson 2: When traffic increases, possibly beyond the capacity of the highway, traffic management has to account for one more variable – starvation: the amount of time a vehicle has to wait at a metered light to enter the highway. The longer the wait, the more frustrated the drivers, who will find a better route to their destination.

Lesson 3: On a highway, lanes are drawn; better hosting makes lanes too. The way most hosting providers take traffic is analogous to having no lanes, in the hope that maximum throughput will be achieved by letting hits contend for resources, leaving the operating system to decide which process gets to use them.

What are the recommended steps to achieve scaling?

As a first step in server-side scaling, we move the database layer out to another instance or server. The main reason is that it is easier to allocate resources within a single server when the workload is similar.

In our multi-part series we take you through achieving scale. The series is aimed at store owners who need not be technical but are ultimately responsible for decisions about the store. Until now you had to depend on an expert, yet there are no clear answers: the expert makes judgement calls, most likely based on prior experience. As a matter of fact, no two webstores are alike, so the results of such efforts vary. This series will make you better informed.

We start by looking at a popular form of scaling – using FPC or Full Page Cache and other types of caches.

Another important aspect of scale is code quality, specifically as it relates to scaling. Scaling is difficult to achieve reliably if any externally dependent blocking service is executed as part of a hit. Examples include sending email directly to a recipient or an external service, or pushing information from the server to an external service. All such processing should be done through some form of queue, handled by a different process such as a cron job. Until Magento 1.9.2.4, for example, the default email sending was inline, slowing the display of the order success page.

Autoscaling adds and removes servers (and hence resources) – something traffic managers cannot do with highways. This gives website scaling the advantage of being more elastic.