Security

Security is key to any production website. We believe you cannot simply bolt security onto your server from the outside. Security is achieved with a combination of policies and technology.

Review our blog articles on security.

Content Security Policy in Magento : inline scripts and nonce

Introduction to Content Security Policy

Content Security Policy (CSP) is an HTTP header that promises to enhance security and prevent card-skimming ("cartjacking") attacks, or at least make them difficult.

A popular attack vector is for malicious javascript code to be added to your webpage to skim credit card or other user data as a visitor enters it. The malicious code is added either by attacking a vulnerability in the website, by adding javascript to a header, or by attacking a 3rd party service and compromising a javascript that is part of the page.

The CSP header identifies the domains from which javascripts may be loaded, or provides signatures of legitimate scripts. The browser then refuses to load scripts from domains that are not whitelisted, or scripts whose signatures do not match.

Inline javascripts

It is quite common to run javascript inline - using a <script> html tag.
If a website is compromised - say the HTML head content that gets injected into the page from the database is compromised - critical user data, including credit cards, can be leaked.

It is essential to protect the site not only from being injected with 3rd party js links, but also from being injected with inline javascript.

CSP's script-src can take a 'nonce-<id>' value; the browser will then only run inline scripts whose script tag carries a nonce attribute with the same value.
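
For example (the nonce value below is purely illustrative), the response header and the inline script must carry the same value:

Content-Security-Policy: script-src 'self' 'nonce-r4nd0m123'

<script nonce="r4nd0m123">
  // allowed to run: the nonce matches the CSP header
</script>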

Varnish caching and the nonce challenge

nonce stands for "number used once". By definition it should be a unique value per page load. If Magento were to generate nonce values, it would not be possible to cache these pages in varnish as a full page cache.

To remain cache-friendly, then, the nonce value has to be generated at the "edge" - either in varnish as it returns a body, or in nginx, which may sit at the edge in front of varnish for TLS termination.

Using the nginx sub_filter directive and a header filter written in javascript (using the njs module), luroConnect replaces a placeholder nonce value sent by Magento with a valid nonce value.

Securing the placeholder

The placeholder has to be a secret between Magento and the nginx that replaces it. Our nginx changes expect Magento to send it in a response header "nonce_unique", which is then used for substitution.

Note : Writing a search-and-replace before Magento sends the http body is NOT considered secure. Recent CosmicSting-compromised websites may insert a script in the header and get it legitimized by this code.

/etc/nginx/lc/njs/csp_nonce.js :

function csp_nonce_header_replace(r) {
    // Magento sends the placeholder it embedded in the page in this response header
    var unique_csp_nonce_placeholder = r.headersOut['nonce_unique'];
    if (!unique_csp_nonce_placeholder) return;
    var csp = r.headersOut['Content-Security-Policy'];
    if (!csp) return;
    // replace every occurrence of the placeholder in the CSP header;
    // the TLS session id serves as the nonce value, matching the body sub_filter below
    csp = csp.replaceAll(unique_csp_nonce_placeholder, r.variables.ssl_session_id);
    r.headersOut['Content-Security-Policy'] = csp;
    // remove the shared secret so it does not leak to the client
    delete r.headersOut['nonce_unique'];
}

export default csp_nonce_header_replace;

in the nginx configuration that terminates TLS and proxies to varnish

    location / {
      ## replace the placeholder in the CSP header
      js_header_filter csp_nonce_header_replace;

      ## replace the placeholder in the body
      sub_filter_once off;
      sub_filter_types *;
      sub_filter $upstream_http_nonce_unique $ssl_session_id;

      ## Proxy to varnish
      proxy_pass  http://M2_lbbackend;
      proxy_buffer_size 128k;
      proxy_buffers 4 256k;
      proxy_busy_buffers_size 256k;
      proxy_set_header X-Real-IP  $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-Port 443;
      proxy_set_header Host $host;

      ## for the nonce body replacement to work, varnish must not serve gzip-compressed bodies
      proxy_set_header Accept-Encoding 'identity';
    }

in nginx.conf

  • load the js plugin
load_module modules/ngx_http_js_module.so;
  • in http section add
http{
    ... 

    ## use njs to replace the CSP nonce placeholder in the response header
    js_path       /etc/nginx/lc/njs/;
    js_import csp_nonce_header_replace from csp_nonce.js;

    ...
}
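
To verify the setup, fetch the homepage headers and check that the CSP header carries the substituted nonce and that the nonce_unique header does not leak to clients (the domain below is a placeholder):

curl -sI https://www.example.com/ | grep -i -e 'content-security-policy' -e 'nonce_unique'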

Would you like to switch to a modern hosting platform?

Schedule a call for a free evaluation!

With features like ~0 downtime code deploys and autoscaling to reduce your hosting costs, luroConnect offers you an unparalleled hosting environment for Magento.

Schedule a call and we will show you how we can

  • Improve your hosting, possibly with autoscale
  • Set up managed dev, staging and production environments
  • Measure server performance every minute, with alerts for slowdowns
  • Run a multi-point health check every day
  • Optimize your hosting costs

What should (and should not) be in a robots.txt file?

What is robots.txt?

robots.txt is a file, accessible as the URL https://<domain>/robots.txt, that tells crawlers and BOTs which URLs they may access on the domain / website.

However, there is a lot of confusion among internet marketers on its value and power.

The format of the file is very simple. Instructions to allow or disallow paths are grouped per user agent. The user agent itself is a shortened version of the User-Agent header, and each BOT publishes its own shortcode. Google, for example, has a top-level Googlebot but many others for images, ads, etc.

In addition, robots.txt has a global Sitemap setting. This directive tells all crawlers where the sitemap file is; most commonly it is an xml file.

Each crawler is free to interpret robots.txt as it sees fit, including whether to honor the directives at all.

Do you use Disallow in robots.txt?

SEO experts and internet marketers would like to think that adding a Disallow for an existing URL will cause Google to not index the page.

That is not true – Google states this in many places in its documentation. In one place it puts up a warning:

“Warning: Don't use a robots.txt file as a means to hide your web pages from Google search results.

If other pages point to your page with descriptive text, Google could still index the URL without visiting the page. If you want to block your page from search results, use another method such as password protection or noindex."

We actually see internet marketers in the Magento space browse the internet for a suggested robots.txt with perhaps the longest content they can find, thinking that the more Disallows they have, the better Google will crawl.

Security Issues with bad disallows

We see commented sections saying “#Directories”.

Little do they know that they are exposing the internal directory structure of the application. And since robots.txt is available to everyone, including bad actors, they have just created a security issue.

We also see some marketers list a huge number of crawlers they don't want on their website. This is completely unnecessary - tell your hosting provider instead. If you are with luroConnect, all our plans include "BOT Blockers". Just let us know and we will match the real HTTP User-Agent header and block it.

What would be the best way to structure a robots.txt?

There is no significance to Allow!

User-agent: *
Disallow: /<path visible to users that you don't want crawlers to visit, e.g. product compare>

User-agent: bingbot
Crawl-delay: 1

Sitemap: <sitemapurl>

Note, though, that Google suggests using noindex on the pages you put in Disallow.

Another interesting fact: Google documents that it caches robots.txt for 24 hours, essentially meaning that cache control headers are ignored by Google. However, a browser does cache robots.txt, so when checking the current robots.txt of a website as Google will see it, please disable your browser cache.
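
For example, fetching it from the command line avoids the browser cache entirely (the domain is a placeholder):

curl -s https://www.example.com/robots.txt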

So, why a note on robots.txt on a managed hosting blog?

We are a hosting platform that understands the importance of SEO and have acted on it.
We know and keep track of robots.txt, sitemaps and feeds for all our customers.

We also feature a very strong BOT Blocker - regular-expression matching of any user agent string, identification and blocking of googlebot imposters, and basic-authentication password protection of staging and dev sites to ensure they don't get crawled.

If you are SEO-knowledgeable and do not find the recommendation acceptable, we encourage you to put your comments in this blog. We will acknowledge and publish all updates we accept.

Sansec reports new Magento 1 hack

Over the weekend of Sept 11, 2020, Sansec reported a web skimming attack on Magento 1 stores. It was the largest single-day automated attack recorded by Sansec.

What are skimming attacks?

"Skimming" attacks are malicious code added to your website so when a site visitor is entering any personal information including credit card, the content is "skimmed" and sent to the attacker.

The website looks completely normal.

In September 2018, British Airways revealed that the information of 380,000 passengers had been skimmed from its website. The modus operandi of this attack was access to the code (possibly the version control of a 3rd party javascript module used on the website). The attack went undetected for months.

Some attacks are also called "Magecart" attacks.

In Magento we see 2 popular ways

  • Break the admin password and upload content to "Miscellaneous Header" or "Miscellaneous Footer" sections.
  • Upload a php code file which in turn loads the real malicious javascript into the page - either directly, or by modifying a known javascript file.

The current attack is of the second variety.

Stores on luroConnect were not attacked!

The attack, as described by Sansec in the article, used Magento Connect to bypass the Magento admin and upload malicious code into javascript files to facilitate skimming of credit card information.

luroConnect has many rules that helped prevent this attack from affecting any of our Magento 1 websites.

Rule 1 : The "/downloader" URL is not accessible on any live or staging website. We expect code to be deployed through git and the developer to use a manual process to install modules. We disallow Magento Connect based installation on any of our managed websites.
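
As an illustration, this kind of rule can be expressed in nginx (a minimal sketch; our actual configuration has more context):

## block the Magento Connect downloader entirely
location ^~ /downloader/ { deny all; }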

Rule 2 : Our web directory owner and hosting users are different. The hosting user is the user php code runs as. Moreover, the /skin folder is not writable by the hosting user.

Rule 3 : We use a static minifier and deploy the code to a folder skin.min which is not in git. The /skin folder itself is never used.

Rule 4 : Staging and dev environments are protected using HTTP Basic Authentication. Automated attack vectors would need to add a password guesser before they could even reach the staging URLs. This matters because a developer may have relaxed permissions in the dev environment.

Rule 5 : Our platform bars ssh access for the hosting user. This prevents any accidental change in permissions from becoming permanent. Even in the rare case ssh access is given (for debugging purposes), upon relinquishing the access we sanitize the environment with default permissions.

How to protect your store?

One of the best ways is to sign up for Sansec's security scanner, eComscan.

luroConnect is a very secure platform for Magento hosting. We call it layered security - from a secure file system and strict folder permissions, to an inbuilt WAF with configurable rules to partnering with Sansec for security scans.

We host you on your cloud or physical hardware using our stack. Learn more about our plans here.

Magento 1 end of life

June 2020 is a few weeks away and many stores are not ready to move away from Magento 1. But wait - you don't have to migrate, now or in the long run.

COVID-19 Update : We find many customers have had to delay their Magento 2 launch in these uncertain times. We also know many of them did not have plans to keep Magento 1 up to date. In fact, we know agencies that have stopped support for Magento 1.

Our Magento 1 EOL support plan starts at a low cost of $200 per month with no long-term contract. It includes a review of your current hosting for security, moving your website to the latest Magento 1.9 and the latest supported php, as well as adding additional security measures to your website. It also includes help signing up with the Mage-One or OpenMage projects for support beyond Magento 1 EOL, if required.

Signup now (no credit card required) and we will be in touch with you.

What does end-of-life for Magento 1 mean?

Magento 1 End-of-Life does not mean your website will stop working. It means Adobe will stop giving fixes for Magento 1, even security patches. As the php version in use reaches its own end of life, Adobe will not provide compatibility upgrades either.

However, being an open source platform, your Magento 1 website will not stop working. The code and license do not restrict you from running the website.

Stay on Magento 1 for short or even long term

That is a valid option and many customers are choosing it. It makes sense if:

  • You have a lot of investment in the customizations which may be difficult to replicate anywhere
  • You have a stable money generating store and any change looks like a risk
  • You are in the process of migrating, but the migration may take some time

What are the options to stay on Magento 1?

  • Use a paid support plan from Mage-One (https://mage-one.com).
    luroConnect is a partner and we will apply the patches for you as they are released.
  • Use open source Magento 1 fork (https://github.com/OpenMage/magento-lts) with support from the community.

What are the risks?

  • Support from either of the above will reduce over time as many websites move off Magento 1
  • Developer support may reduce as most developers move to Magento 2
  • Plugin vendors have already stopped, or are stopping, support

Magento 1 and PCI

Many merchants received an email from PayPal, or read their post, advising a move to another platform after June 2020.

It refers to PCI DSS Requirement 6 ("develop and maintain secure systems and applications").

Your Magento 1 store software has many vendors, including Adobe / Magento for the core, but also plugin vendors. Since it is open source, you are free to modify the core and take on that responsibility yourself; other requirements may then apply.

By switching to an alternative vendor for the Magento 1 core - such as Mage-One or OpenMage - in our non-legal opinion you are not on Adobe's Magento 1 any more and do not have Adobe as your vendor. If a plugin vendor no longer provides security patches for your Magento 1 plugin, it is important to take over responsibility for the plugin code as a separate contract.

PCI does not have a vendor approval process. However, your vendor may need to justify satisfying some other requirements for safe and secure coding practices.

However, since your core application may no longer be recognized, you may need to talk to PayPal as a merchant to get PCI approval. This would include scans.

luroConnect Support for Magento 1 past EOL

We have built an add-on package for Magento 1 EOL support, with a 4-point plan to support you.

From EUR 29

Mage-One Patches for post EOL Magento 1 support. Will satisfy PCI vendor requirement.


From EUR 99

Sansec eComscan examines a store for malware, vulnerabilities and unauthorized accounts. Written by well-known security expert Willem de Groot, it scans the files, databases and 3rd party components of Magento.


From USD 50*

WAF : Inbuilt into our nginx, with M1 rules, protects from the OWASP Top 10, with the ability of virtual patching.

From USD 50*

Staging : A staging environment to ensure patches are tested before being taken live.

* This is in addition to our fees for WAF and staging environments if not included in your support plan. Customer pays for hosting costs of staging server.

How we protect you

  1. File system security to prevent 0-day or new unknown vulnerabilities. Strict file and folder permissions prevent uploads to folders that execute code.
  2. Support for Magento 1 nginx rules that do not allow execution of php from skin and js, or of php from media. This causes much malicious code to fail, as it depends on the ability to upload code and then execute it (see the sketch after this list).
  3. WAF - Web Application Firewall - with strict Magento 1 rules. This prevents SQL Injection and cross site scripting related attacks from being allowed.
  4. Virtual patching - blocking URLs that are known to have vulnerabilities. For example, we block saving of the "Miscellaneous" header and footer sections from the admin login.
  5. Partnership with Mage-One to get the latest patches and keep your site up to date.
  6. Admin login protection via dual passwords. The first is a basic http challenge. This makes guessing the admin password harder, as 2 passwords have to be guessed (see the sketch after this list).
  7. Password guess prevention by restricting how many failed attempts are allowed in a day from the same IP - implemented at the application server level without changing Magento code.
  8. Staging environment to test patches from OpenMage, Mage-One or any other source you may have. We also support upgrading the php version on staging before upgrading production.
  9. Protection of source code by using a secure deploy process
  10. Secure backup with a proven restore strategy
  11. Support for our secure deploy process that ensures 0 downtime during code deploys and does not leave the git folder in the hosting folder. The ability to roll back by switching to any previously deployed version is an added advantage.
  12. System components upgrade - including php. As versions of php approach their security end-of-life and support for higher versions appears in patches, the php version will also be upgraded.
  13. Partnership with Sansec for their eComscan scanning product.
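
As a minimal sketch of rules 2 and 6 in nginx (the paths, realm and htpasswd file name are illustrative; real configurations carry more context):

## rule 2 : never execute php from asset or upload folders
location ~* ^/(media|skin|js)/.*\.php$ { return 403; }

## rule 6 : first password - an HTTP basic auth challenge in front of the admin
location /admin/ {
  auth_basic           "Restricted";
  auth_basic_user_file /etc/nginx/htpasswd-admin;
  try_files $uri $uri/ @handler;
}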

We can analyze your site for free

Schedule a call

Not happy with your website performance and want an expert to look at it?

  • We will analyze your site using public information.
  • We will ask you to give us a 1 day web server log file.
  • We will try to identify what steps, if any, you should take to improve your site's performance.

Hosting help moving to Magento 2

When moving to Magento 2, to reduce the downtime during the move, luroConnect has plans for you.

  • Staging server support plans.
  • Magento 2 transition plan with minimum downtime. Our care even includes URL rewrite rules to ensure SEO value is not lost during transition.

Meet Magento India 2020 – panel discussion

I was on a panel at Meet Magento India 2020 #MM20IN moderated by Brent Peterson of Wagento.

It was an honor to be part of an elite panel - Miguel Ignacio Balparda of Nexcess / Liquid Web, Shehzad Karkhanawala of Webscale Networks, and Arun of ServerGuy. The discussion was lively and well received.

While the time was too short to address all the questions, Brent was kind enough to circulate the questions ahead of time.

Here would be my responses if there had been enough time!

  1. What are some of the latest trends you are seeing in the hosting world
    Integrations into the application – Magento Cloud as well as PWA vendors are offering hosting-included packages.
    Plugins or apps are going SaaS.
    Underlying tech with Kubernetes is changing the hosting world.
    Magento hosting has to look like SaaS to the merchant – with the flexibility.
  2. Hosting as we knew it five years ago has now become a commodity. There are some interesting approaches to hosting in the market today. In your opinion, what defines and differentiates a great hosting provider for e-commerce?
    If you look at the progression of economic value – commodities, goods, services, experience – a quick example being: you could use ingredients to make a cake (commodities), you could buy a mix to bake a cake (goods), you could buy a cake from a cake shop (services), or you could have your birthday party organized, of which the cake is a part (experience). Hosting has reached the experience level with SaaS. We have to make that happen with Magento hosting.
  3. You all provide managed hosting services. What is different about managed as compared to regular hosting?
    Managing is offering hosting at a higher level of abstraction.
    Traditionally companies like Rackspace provided a server or a managed server and charged something like $100 per server as managed or support fee. In reality you paid for a system admin.
    When you give a full application level support, you are abstracting away everything.
    While many managed hosting services talk about 15 minute response time to tickets, we measure ourselves differently – we want no tickets. If a customer has to call and tell us their site is slow or god forbid down, we have lost.
    A managed hosting provider sits on the merchant’s side, between the developer and the hosting service. On one hand, we ensure the hosting bill is optimized.
    On the other, we ensure we host a 12-factor app – keeping the developers honest about the quality of their delivery. We partner with many developers.
  4. What is the relationship between the customers and the hosting company in a managed service agreement?
    I will give you an analogy – a few years ago I built a house in Bangalore. I hired an architect and a contractor. I made the contractor report to the architect and paid the architect an additional 6.5% as fee to manage the contractor who managed the workers and material. 
    We are like the architect in this example – on the side of the merchant, making things happen on the hosting.
  5. What’s the difference between public and private cloud?
    everything and nothing!
    Private cloud has power, uptime, connectivity and flexibility issues.
    If the private data center addresses some of these issues, there is no difference from managed hosting perspective.
  6. All around the globe, there are big events like Cyber Monday and Black Friday. How do you help your customers prepare for this type of event? Auto-scaling?
    Autoscaling is an obvious part, but there is more to the preparation.
    Autoscaling has a latency in scaling.
    We like to understand the marketing strategy – if they are sending SMS – which generates a far higher peak than almost any other marketing activity – at what time and what volume, so instances can be launched ahead of time. We encourage staggering, of course, but it is difficult to make marketing understand that. We would prep autoscale to have more instances.
    Reduce as many moving parts as possible.
    We ensure code freeze is well before the event.
    Avoid product uploads and inventory sync, if possible.
    Ensure indexing is in a clean state.
    Cache state – ensure all caches are pre-loaded / primed.
  7. What are the biggest differences between a hosting environment for Magento 1 and Magento 2?
    • Varnish.
    • Core performance (or performance on a cache miss) is tough to get right in M2.
    • Rabbitmq – but we have not seen mature use of this technology as yet.
    • Build & Release process.
  8. What’s your plan for the Magento 1 EOL in June?
    We are ready to support the hosting. Security is a key concern in the longer run, but we have to live with that – react as and when we know what happens. We have merchants that have not changed their code for years - still on old M1, php 5.6, etc., which we will not support.
    Others have invested a lot in M1 and need feature parity to move - features on the admin panel that their packaging staff depend on, or rules to decide how to ship based on warehouses.
    (During the panel discussion participants talked about hardened php. That applies to php 5.x, which means those merchants have been running obsolete versions for a while. I think they should at least upgrade to the latest Magento and php 7.2.)
  9. Some merchants are just not ready to move to M2 due to cost and timeline pressures.  Is there anything developers and their hosting provider partners can do to help clients continue to operate after June 2020?
    Absolutely. The numbers are so large that I feel a community effort to continue, if it happens, will be much appreciated. I think the hosting community is ready to support this – many of us still support php 5.6, for example, as merchants have not recognized the need to move or have unsupported plugins that have not been upgraded.
  10. Security has become the most critical piece of an e-commerce merchant’s infrastructure strategy. Cyber-attacks are growing in number, intensity, and sophistication. What can a hosting provider do to protect storefronts from threats?
    The biggest threats are 0-day threats and not patching on time.
    0-day threats are hitherto undocumented or unaddressed threats.
    In medicine, when a virus hits, there are 2 strategies. One is handling the virus directly – coming up with vaccines, etc. The other is health-related advice – such as "wash your hands" – crucial things that sound simple but are instrumental in preventing infection across multiple types of viruses. I call these fundamental truths. They address a class of problems.
    I was an engineer when the first viruses came to infect 286 computers. My employer then decided to develop expertise in virus combat. We came up with a simple set of rules that prevented many forms of viruses then. How you booted mattered, for example. What you ran on your computer mattered. I wrote some code that my customer sold to Compaq, and it ended up protecting millions of computers.
    We try to establish such fundamental truths in hosting. Some are taken into standards, like cross site scripting. Brent – when you became a Magento Master a few years ago, you mentioned 777 permissions as a challenge. Magento came up with what they called 2-user hosting – a web server user and a command line user. Others we have developed. We try to take a class of attacks and eliminate them.
    Of course, when a known threat exists, we try to ensure that threat is addressed directly.
  11. What’s your take on the current state of Magento Cloud?
    We have mostly 3rd party opinions, but from what limited 1st hand experience we have, we find it to have many restrictions. Autoscale and tuning caches, like adding more RAM to varnish, are difficult. But Magento Cloud is there and will improve. Adobe got SaaS wrong the first time, but persevered and fixed it to become one of the most profitable companies of our time.
    I would really hope Magento stops seeing Open Source as competition. The two depend on each other.

Analysis of recent website security breaches

Introduction

Recent high-profile website security breaches are worth studying to ensure you have defense mechanisms in place.
We strongly believe in "defense in depth" or "multi-layered security". This includes

  • Defense in multiple places. Meaning that an organization needs to deploy security mechanisms in different places.
  • Layered defenses. This means having many layers of defense.

Let us analyze these hacks and lessons we can learn.

Analysis of 4 hacks of known websites

britishairways.com does not even store credit cards!

British Airways' eCommerce site britishairways.com reported a leak of 380,000 credit cards.

Let us first look at their privacy policy.

Under “How we secure your payment information when you book online” it mentions

  • All of your personal information is encrypted as it travels over the Internet, to and from www.ba.com. When information is encrypted, it is scrambled between your computer and our server. The information is only unscrambled when it safely reaches us. It's fast and safe, and it ensures that your personal information cannot be read by anyone else.
  • When you send your personal details to us, none of the information is stored on the website, it is passed straight back to our secure servers at our Heathrow headquarters, where it only exists as part of the record of your transaction.

Factually, what does “none of the information is stored on the website” mean?

Also, based on the clarification given by BA, we can assume they do not store credit card information along with the customer data.

The breach :

  • Magecart created a fake site, baways.com, served securely over HTTPS, which was used to store the siphoned credit card information
  • As is typical of Magecart, they broke into the britishairways.com servers, possibly through an admin interface, to change a javascript file (modernizr.js), an open source library used on the website. Magecart inserted a short javascript that would monitor keystrokes and report back to the baways.com site they owned.

That clearly indicates there was a breach. However, if one assumes the main servers of britishairways.com are well protected, there are 2 possible break-in routes

  • The website may have an administrative interface to change or upload a javascript file that gets used in the build
  • Or, the attackers were able to gain access to the build process of the britishairways website and ensure a modified version of the required javascript file was inserted.

What lessons can we learn?

  1. An admin interface to update HTML or javascript code directly is a bad idea. If your environment has that convenience, it is important to protect it. Protection can be IP based, so only certain IPs can access it, or the interface can be hidden in general and unlocked only when needed (a sketch follows this list).
    It is also important to periodically check that HTML or JavaScript stored in the database is clean. For example, do not allow any script tag in the product description.

  2. A live site should be hosted such that directories from which any code is executed are not writable.
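
As a sketch, if such an admin interface must stay reachable, nginx can restrict it to known IPs (the path and network below are examples):

## allow the admin interface only from a known office network
location /admin/ {
  allow 203.0.113.0/24;
  deny  all;
  try_files $uri $uri/ @handler;
}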

equifax.com – 143 million credit histories stolen

The breach :

A vulnerability in "Apache Struts" was exploited, giving the attacker the ability to execute arbitrary shell commands, and the hacker was able to download the entire credit history data stored.

Exact vulnerability in Apache struts – “incorrect exception handling and error-message generation during file-upload attempts, which allows remote attackers to execute arbitrary commands via a crafted Content-Type, Content-Disposition, or Content-Length HTTP header”

A patch was available but had not been applied on the server.

What lessons can we learn?

  • While the obvious lesson is to update when patches are available, we think that is not always possible, or there will at least be a lag between the patch being released and the website being patched.
  • The web user – the user the web application runs as – should have need-to-know access permissions (a sketch follows this list).
  • The web root should be clean – do not keep temporary folders or files that are not required – or limit the permissions of the web user.
  • Monitor network activity for any unusual pattern – the download of 143m records should not go unnoticed.
  • Use a reverse proxy as the web server (like nginx) and do not expose the actual application servers to the internet. In fact, ensure the web server is a separate container or instance that only serves static content and passes other requests upstream to the application server.
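
A sketch of the need-to-know idea for file permissions (user names and paths are examples): code is owned by a deploy user, php runs as a different user with read-only group access, and only folders that genuinely need writes get them.

# code owned by the deploy user; php-fpm runs as the nginx user
chown -R deploy:nginx /home/example/www
find /home/example/www -type d -exec chmod 750 {} \;
find /home/example/www -type f -exec chmod 640 {} \;
# only media and var need group write access
chmod -R g+w /home/example/www/media /home/example/www/var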

facebook.com – 30 million access tokens stolen

The breach :

As per facebook ...

The vulnerability was the result of a complex interaction of three distinct software bugs and it impacted “View As,” a feature that lets people see what their own profile looks like to someone else. It allowed attackers to steal Facebook access tokens, which they could then use to take over people’s accounts. Access tokens are the equivalent of digital keys that keep people logged in to Facebook so they don’t need to re-enter their password every time they use the app.

First, the attackers already controlled a set of accounts, which were connected to Facebook friends. They used an automated technique to move from account to account so they could steal the access tokens of those friends, and for friends of those friends, and so on, totaling about 400,000 people. In the process, however, this technique automatically loaded those accounts’ Facebook profiles, mirroring what these 400,000 people would have seen when looking at their own profiles. That includes posts on their timelines, their lists of friends, Groups they are members of, and the names of recent Messenger conversations. Message content was not available to the attackers, with one exception…

The attackers used a portion of these 400,000 people’s lists of friends to steal access tokens for about 30 million people.

What lessons can we learn?

  • Do not use generic, hidden tokens that give blanket cross access to APIs. Instead, each token should have an ACL (Access Control List), and the minimum ACL required should be selected.
  • While this is particularly true for web services, in eCommerce websites payment gateway tokens should not be sent to the frontend until required, even in hidden fields.

ticketmaster.com – a compromised 3rd party script from Inbenta

The breach :

  • On the evening of Saturday, June 23rd (2018), we received notice from our customers at Ticketmaster that the personal data of their users had been compromised. Upon further investigation by both parties, it was confirmed that the source of the data breach was a single piece of JavaScript code, customized by Inbenta to serve Ticketmaster’s particular requests. The attacker(s) located, modified, and used this customized script to extract the payment information of Ticketmaster customers processed between February and June 2018.
  • After a careful analysis of all clues and snapshots from our systems, the technical team at Inbenta discovered that the script had been implemented on the payment page. We were unaware of this, and would have advised against doing so had we known, as it presents a point of vulnerability.
  • Inbenta has conducted a detailed analysis of all the file systems used for development and production systems, thoroughly analysing any difference between the original source code and the version in the production environment. We can confirm that just 3 files were altered that affected 3 specific websites for Ticketmaster.
  • One of the advantages of hosting scripts at Inbenta’s servers that are embedded in our customer’s website is the flexibility that Inbenta can offer to our customers to have new functionalities or updates up and running quickly. The downside is that we cannot monitor which web pages our customers are embedding those scripts on and therefore we cannot prevent customers putting them in pages that collect sensitive information.

What lessons can we learn?

  • Avoid including 3rd party libraries from 3rd party servers. Whether this can be implemented is sometimes a matter of size – if the vendor is big and changes the content often, it cannot be done. However, there are many instances where it can be avoided.
  • Ensure the version in the original source code and that in the production environment are the same; if not, there has been a breach (see the sketch after this list).
  • Production site should be hosted to ensure directories from which any code is executed are not writable.
  • Secure the CI/CD process so files in version control are not modified on the way. Some need to be compiled / compressed, etc., but the possibility of inserting code has to be prevented.
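
One simple way to detect such tampering (paths are illustrative): record checksums of the deployed javascript at deploy time, then verify them periodically; any difference indicates an unauthorized change.

# at deploy time, record checksums of all deployed javascript
find /home/example/www/js -type f -name '*.js' -exec sha256sum {} \; > /var/deploy/js.sha256
# later, from cron or a monitor; a non-zero exit code means a file changed
sha256sum -c --quiet /var/deploy/js.sha256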

Conclusion : Do not take Security for granted

Just having SSL or a WAF is not enough security. Even having PCI certification is not enough security.

Instead, building security in multiple layers, following first principles of security such as "need to access", and monitoring are all needed. Since we cannot assume any security is foolproof, we have to actively prevent breaches and also detect them early when they happen.

To secure your Magento website, refer to our article "Enterprise Grade Security For Your Magento Website".


The DIY Guide to Magento Hosting

Introduction

Quite often we depend on the hosting provider to give us a fast website. But affordable hosting options, such as a VPS, are now available - they just require some DIY skills to set up.

In this DIY guide we help you set up a high performance Magento hosting environment. This guide includes configurations and tips. As with any DIY, you need to have some knowledge and tools. We expect that you have a basic working knowledge of linux and its commands, access to root, the ability to install standard packages, as well as the use of a text editor in linux – such as vim. An idea of users, uids, groups, gids and processes will be useful for troubleshooting if you need it.

Much like getting a car with a good engine is only the starting point if you love fast cars, a good server is only the starting point if you want great page load times. A good transmission would be the next thing to look for. The transmission in hosting is the set of hosting components and how they communicate.

Choosing a hosting provider

All hosting providers claim the best hardware but sell you on price. How do you compare one with the other? Knowing how they work, and running some tests, may help!

How clouds and virtualization work : Today's virtualization technology allows hosting providers to share a larger computing resource they own amongst many customers. The technology also allows overcommitment – much like an airline that overbooks seats in the hope someone cancels, a hosting provider can overcommit resources such as CPU in the hope that not all customers will use all the CPUs given to them at the same time. However, unlike an airline, virtualization technology can degrade performance without visibly failing.

Testing the resources : We have a quick way to test the quality of your hardware. We measure the speed of RAM and disk in these simple tests. In each case we write data serially to a chunk of memory or disk and measure the throughput. Warning : free memory and disk space are required for the tests.

The memory test

# run as root: mounts a 128MB tmpfs, writes to it, then cleans up
tempDir=$(mktemp -d -t linux-benchmark-XXX)
mount -t tmpfs -o size=128m $tempDir $tempDir
dd if=/dev/zero of=$tempDir/test_ bs=1M count=128 conv=fdatasync
rm -f $tempDir/test_; umount $tempDir; rmdir $tempDir

The disk test
This test gives you a momentary result; run it multiple times to get a good average. Anything below 150 MB/s is slow. A typical SSD should give about 450 MB/s, good ones above 700 MB/s.

tempFile=<drive>/disktest; dd if=/dev/zero of=$tempFile bs=1M count=1024 conv=sync oflag=direct; rm -f $tempFile

The architecture

We will assume a simple architecture – all services on the same server. This is for simplicity and we have seen it to work well with sites on physical hardware with rather consistent traffic in a 24 hour period.

There can be alternate architectures

  • a separate db server
  • multiple app servers – serving all traffic behind a load balancer
  • a separate app server – say, for admin URLs, with a path-based load balancer (sketched below)

Each of these brings additional complexities in configuration. Please use the comments section below if you have a more complex architecture.
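
As an illustration of the last option, a path-based split can be expressed in the nginx that fronts the app servers (IPs and ports are examples):

## route admin traffic to a dedicated app server
upstream m2_app   { server 10.0.0.11:8080; }
upstream m2_admin { server 10.0.0.12:8080; }

server {
  listen 80;
  location /admin/ { proxy_pass http://m2_admin; }
  location /       { proxy_pass http://m2_app; }
}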

The nginx web server

Note : We use nginx – read our blog article on why we prefer nginx over apache.
The basic nginx configuration for Magento is below.

## define both backends
upstream socketbackend {
  server unix:/var/run/php-fcgi-www.sock;
}
upstream tcpbackend {
  server 127.0.0.1:9000;
}
server {
  listen 443 ssl http2;
  server_name example.com www.example.com;
  root /home/example/www;
  ssl_certificate       /etc/pki/tls/certs/example_certs/www_example.com.crt;
  ssl_certificate_key   /etc/pki/tls/certs/example_certs/www_example_com.key;
  ssl_protocols         TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";  
  ssl_dhparam /etc/nginx/dhparams.pem;

  access_log /var/log/nginx/example.access.log;
  error_log /var/log/nginx/example.error.log;
  #/* deny should come first */
  location ^~ /app/ { deny all; }
  location ^~ /bin/ { deny all; }
  location ^~ /lib/ { deny all; }
  location ^~ /media/downloadable/ { deny all; }
  location ^~ /dev/ { deny all; }
  location ^~ /vendor/ { deny all; }
  location ^~ /update/ { deny all; }
  location ^~ /var/ { deny all; }
  location ^~ /downloader/ { deny all; }
  location ^~ /admin/ { deny all; }
  location ^~ /phpserver/ { deny all; }
  location ^~ /setup/ { deny all; }
  location ^~ /run/ { deny all; }
  #/* deny access to hidden . files */
  location ~ /\. {
    return 404;
  }
  #/* main php handler */
  location / {
    index index.php index.html;
    try_files $uri $uri/ @handler;
  }
  #/* set long expiry for static content. Update types as needed */
  location ~* \.(jpg|jpeg|gif|png|css|js|ico|swf|woff2|svg|TTF)$ {
    access_log off;
    log_not_found off;
    expires 360d;
  }
  #/* directories where you do not want php to be executed from */
  location ~* /(images|cache|media|skin|js|uploads|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
    return 403;
  }

  #/* if you need setup, remove it from the deny list above */
  location /setup {
    try_files $uri $uri/ @setuphandler;
  }
  # Rewrite Setup's Internal Requests
    location @setuphandler {
    rewrite /setup /setup/index.php;
  }
  # Rewrite Internal Requests
  location @handler {
    rewrite / /index.php;
  }
  location /pub/static {
    try_files $uri $uri/ @static;
  }
  location @static {
    rewrite ^/pub/static/(.*)$ /pub/static.php?resource=$1? last;
  }
  location ~ \.php$ { ## Execute PHP scripts
    try_files $uri =404;
    expires off;
    fastcgi_pass socketbackend;
    #fastcgi_pass tcpbackend;
    ## allow long-running Magento requests; a 1s timeout would cut off most pages
    fastcgi_read_timeout 600s;
    ## this server block is TLS-only, so HTTPS can be set unconditionally
    fastcgi_param HTTPS on;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }
}

Php-fpm – the php interpreter

An important factor is not to overcrowd the CPU and memory you have, as the default configuration does. The main configuration file is www.conf, specifically the process manager section.
Below is a production php-fpm.d/www.conf file.

; Start a new pool named 'www'.
[www]

; Unix user/group of processes
; luroConnect : our 2-user system requires the php-fpm and nginx processes run as the same user
; give the nginx group read-only access to the hosting directory, typically in /home//www
; if a directory needs to be written to by php (uploads; consider media, var), those directories
; need group write permission
user = nginx
group = nginx

; The address on which to accept FastCGI requests.
; Valid syntaxes are:
;   'ip.add.re.ss:port'    - to listen on a TCP socket to a specific IPv4 address on
;                            a specific port;
;   '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
;                            a specific port;
;   'port'                 - to listen on a TCP socket to all addresses
;                            (IPv6 and IPv4-mapped) on a specific port;
;   '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
; luroConnect : Match listen to the value in nginx. A single server system can use a unix socket.
; listen = 9000
listen = /var/run/php-fcgi-www.sock

; Set listen(2) backlog.
; Default Value: 511 (-1 on FreeBSD and OpenBSD)
;listen.backlog = 511

; Set permissions for unix socket, if one is used. In Linux, read/write
; permissions must be set in order to allow connections from a web server. Many
; BSD-derived systems allow connections regardless of permissions.
; Default Values: user and group are set as the running user
;                 mode is set to 0660
;listen.owner = nobody
;listen.group = nobody
;listen.mode = 0660
; When POSIX Access Control Lists are supported you can set them using
; these options, value is a comma separated list of user/group names.
; When set, listen.owner and listen.group are ignored
;listen.acl_users =
;listen.acl_groups =

; List of addresses (IPv4/IPv6) of FastCGI clients which are allowed to connect.
; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original
; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address
; must be separated by a comma. If this value is left blank, connections will be
; accepted from any ip address.
; Default Value: any
; 
;listen.allowed_clients = 127.0.0.1

; Specify the nice(2) priority to apply to the pool processes (only if set)
; The value can vary from -19 (highest priority) to 20 (lower priority)
; Note: - It will only work if the FPM master process is launched as root
;       - The pool processes will inherit the master process priority
;         unless it specified otherwise
; Default Value: no set
; process.priority = -19

; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static  - a fixed number (pm.max_children) of child processes;
;   dynamic - the number of child processes are set dynamically based on the
;             following directives. With this process management, there will be
;             always at least 1 children.
;             pm.max_children      - the maximum number of children that can
;                                    be alive at the same time.
;             pm.start_servers     - the number of children created on startup.
;             pm.min_spare_servers - the minimum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is less than this
;                                    number then some children will be created.
;             pm.max_spare_servers - the maximum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is greater than this
;                                    number then some children will be killed.
;  ondemand - no children are created at startup. Children will be forked when
;             new requests will connect. The following parameter are used:
;             pm.max_children           - the maximum number of children that
;                                         can be alive at the same time.
;             pm.process_idle_timeout   - The number of seconds after which
;                                         an idle process will be killed.
; Note: This value is mandatory.

; luroConnect : We set this to static so we can dedicate CPU resources to php.
; Create separate pools for processing URLs with different levels of CPU utilization.
; For example, if you have URLs that do a CURL, make a curl pool. These consume less CPU.
pm = static

; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
; luroConnect : we set it to slightly more than the cores available for php.
; 5 is ideal for 4 cores reserved for php.
pm.max_children = 5

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 5

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 5

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 35

; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.process_idle_timeout = 10s;

; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
; luroConnect : Magento and wordpress need a limit, as inadvertently there are memory leaks.
; Set this number based on hits and the avg process size, both available through the status page.
pm.max_requests = 500

; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. It shows the following informations:
;   pool                 - the name of the pool;
;   process manager      - static, dynamic or ondemand;
;   start time           - the date and time FPM has started;
;   start since          - number of seconds since FPM has started;
;   accepted conn        - the number of request accepted by the pool;
;   listen queue         - the number of request in the queue of pending
;                          connections (see backlog in listen(2));
;   max listen queue     - the maximum number of requests in the queue
;                          of pending connections since FPM has started;
;   listen queue len     - the size of the socket queue of pending connections;
;   idle processes       - the number of idle processes;
;   active processes     - the number of active processes;
;   total processes      - the number of idle + active processes;
;   max active processes - the maximum number of active processes since FPM
;                          has started;
;   max children reached - number of times, the process limit has been reached,
;                          when pm tries to start more children (works only for
;                          pm 'dynamic' and 'ondemand');
; Value are updated in real time.
; Example output:
;   pool:                 www
;   process manager:      static
;   start time:           01/Jul/2011:17:53:49 +0200
;   start since:          62636
;   accepted conn:        190460
;   listen queue:         0
;   max listen queue:     1
;   listen queue len:     42
;   idle processes:       4
;   active processes:     11
;   total processes:      15
;   max active processes: 12
;   max children reached: 0
;
; By default the status page output is formatted as text/plain. Passing either
; 'html', 'xml' or 'json' in the query string will return the corresponding
; output syntax. Example:
;   http://www.foo.bar/status
;   http://www.foo.bar/status?json
;   http://www.foo.bar/status?html
;   http://www.foo.bar/status?xml
;
; By default the status page only outputs short status. Passing 'full' in the
; query string will also return status for each pool process.
; Example:
;   http://www.foo.bar/status?full
;   http://www.foo.bar/status?json&full
;   http://www.foo.bar/status?html&full
;   http://www.foo.bar/status?xml&full
; The Full status returns for each process:
;   pid                  - the PID of the process;
;   state                - the state of the process (Idle, Running, ...);
;   start time           - the date and time the process has started;
;   start since          - the number of seconds since the process has started;
;   requests             - the number of requests the process has served;
;   request duration     - the duration in µs of the requests;
;   request method       - the request method (GET, POST, ...);
;   request URI          - the request URI with the query string;
;   content length       - the content length of the request (only with POST);
;   user                 - the user (PHP_AUTH_USER) (or '-' if not set);
;   script               - the main script called (or '-' if not set);
;   last request cpu     - the %cpu the last request consumed
;                          it's always 0 if the process is not in Idle state
;                          because CPU calculation is done when the request
;                          processing has terminated;
;   last request memory  - the max amount of memory the last request consumed
;                          it's always 0 if the process is not in Idle state
;                          because memory calculation is done when the request
;                          processing has terminated;
; If the process is in Idle state, then informations are related to the
; last request the process has served. Otherwise informations are related to
; the current request being served.
; Example output:
;   ************************
;   pid:                  31330
;   state:                Running
;   start time:           01/Jul/2011:17:53:49 +0200
;   start since:          63087
;   requests:             12808
;   request duration:     1250261
;   request method:       GET
;   request URI:          /test_mem.php?N=10000
;   content length:       0
;   user:                 -
;   script:               /home/fat/web/docs/php/test_mem.php
;   last request cpu:     0.00
;   last request memory:  0
;
; Note: There is a real-time FPM status monitoring sample web page available
;       It's available in: @EXPANDED_DATADIR@/fpm/status.html
;
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
pm.status_path = /status

; The ping URI to call the monitoring page of FPM. If this value is not set, no
; URI will be recognized as a ping page. This could be used to test from outside
; that FPM is alive and responding, or to
; - create a graph of FPM availability (rrd or such);
; - remove a server from a group if it is not responding (load balancing);
; - trigger alerts for the operating team (24/7).
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
ping.path = /ping

; This directive may be used to customize the response of a ping request. The
; response is formatted as text/plain with a 200 response code.
; Default Value: pong
;ping.response = pong

; The access log file
; Default: not set
; luroConnect : the access log is useful for measuring the CPU utilization percent of a request.
; A lower percent indicates either high mysql or curl time.
; If there are no curl calls, check the mysql cache and tune.
access.log = log/$pool.access.log

;access.format = %R - %u %t %m %r%Q%q %s %f %{mili}d %{kilo}M %{total}C%%
access.format = %R - %u %t %m %{REQUEST_URI}e %r%Q%q %s %f %{mili}d %{kilo}M %{total}C%%

; The access log format.
; The following syntax is allowed
;  %%: the '%' character
;  %C: %CPU used by the request
;      it can accept the following format:
;      - %{user}C for user CPU only
;      - %{system}C for system CPU only
;      - %{total}C  for user + system CPU (default)
;  %d: time taken to serve the request
;      it can accept the following format:
;      - %{seconds}d (default)
;      - %{miliseconds}d
;      - %{mili}d
;      - %{microseconds}d
;      - %{micro}d
;  %e: an environment variable (same as $_ENV or $_SERVER)
;      it must be associated with embraces to specify the name of the env
;      variable. Some exemples:
;      - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e
;      - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e
;  %f: script filename
;  %l: content-length of the request (for POST request only)
;  %m: request method
;  %M: peak of memory allocated by PHP
;      it can accept the following format:
;      - %{bytes}M (default)
;      - %{kilobytes}M
;      - %{kilo}M
;      - %{megabytes}M
;      - %{mega}M
;  %n: pool name
;  %o: output header
;      it must be associated with embraces to specify the name of the header:
;      - %{Content-Type}o
;      - %{X-Powered-By}o
;      - %{Transfert-Encoding}o
;      - ....
;  %p: PID of the child that serviced the request
;  %P: PID of the parent of the child that serviced the request
;  %q: the query string
;  %Q: the '?' character if query string exists
;  %r: the request URI (without the query string, see %q and %Q)
;  %R: remote IP address
;  %s: status (response code)
;  %t: server time the request was received
;      it can accept a strftime(3) format:
;      %d/%b/%Y:%H:%M:%S %z (default)
;      The strftime(3) format must be encapsuled in a %{}t tag
;      e.g. for a ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t
;  %T: time the log has been written (the request has finished)
;      it can accept a strftime(3) format:
;      %d/%b/%Y:%H:%M:%S %z (default)
;      The strftime(3) format must be encapsulated in a %{}t tag
;      e.g. for an ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t
;  %u: remote user
;
; Default: "%R - %u %t \"%m %r\" %s"
;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%"

; The log file for slow requests
; Default Value: not set
; Note: slowlog is mandatory if request_slowlog_timeout is set
slowlog = log/$pool-slow.log

; The timeout for serving a single request after which a PHP backtrace will be
; dumped to the 'slowlog' file. A value of '0s' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
; luroConnect : use for debugging slow access
; request_slowlog_timeout = 1

; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0

; Set open file descriptor rlimit.
; Default Value: system defined value
;rlimit_files = 1024

; Set max core size rlimit.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0

; Chroot to this directory at the start. This value must be defined as an
; absolute path. When this value is not set, chroot is not used.
; Note: chrooting is a great security feature and should be used whenever
;       possible. However, all PHP paths will be relative to the chroot
;       (error_log, sessions.save_path, ...).
; Default Value: not set
;chroot =

; Chdir to this directory at the start.
; Note: relative path can be used.
; Default Value: current directory or / when chroot
;chdir = /var/www

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Note: on highly loaded environments, this can cause some delay in the page
; process time (several ms).
; Default Value: no
;catch_workers_output = yes

; Clear environment in FPM workers
; Prevents arbitrary environment variables from reaching FPM worker processes
; by clearing the environment in workers before env vars specified in this
; pool configuration are added.
; Setting to "no" will make all environment variables available to PHP code
; via getenv(), $_ENV and $_SERVER.
; Default Value: yes
;clear_env = no

; Limits the extensions of the main script FPM will allow to parse. This can
; prevent configuration mistakes on the web server side. You should only limit
; FPM to .php extensions to prevent malicious users from using other extensions to
; execute php code.
; Note: set an empty value to allow all extensions.
; Default Value: .php
;security.limit_extensions = .php .php3 .php4 .php5 .php7

; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from
; the current environment.
; Default Value: clean env
;env[HOSTNAME] = $HOSTNAME
;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp

; Additional php.ini defines, specific to this pool of workers. These settings
; overwrite the values previously defined in the php.ini. The directives are the
; same as the PHP SAPI:
;   php_value/php_flag             - you can set classic ini defines which can
;                                    be overwritten from PHP call 'ini_set'.
;   php_admin_value/php_admin_flag - these directives won't be overwritten by
;                                     PHP call 'ini_set'
; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no.

; Defining 'extension' will load the corresponding shared extension from
; extension_dir. Defining 'disable_functions' or 'disable_classes' will not
; overwrite previously defined php.ini values, but will append the new value
; instead.

; Default Value: nothing is defined by default except the values in php.ini and
;                specified at startup with the -d argument
;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www@my.domain.com
;php_flag[display_errors] = off
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
;php_admin_value[memory_limit] = 128M

; Set session path to a directory owned by process user
php_value[session.save_handler] = files

; Note : Important to make /var/lib/php/session writable by nginx user
php_value[session.save_path]    = /var/lib/php/session
php_value[soap.wsdl_cache_dir]  = /var/lib/php/wsdlcache

SSL Certificate

Getting an https certificate is now easy with Let's Encrypt.

The nginx configuration enables http2, a newer version of the http protocol. Refer to our blog article on the advantages of http2 for Magento.
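To obtain and auto renew the Let's Encrypt certificate, certbot can be used. A minimal sketch, assuming certbot and its nginx plugin are installed (the domain is a placeholder):

certbot --nginx -d example.com -d www.example.com
# verify that automated renewal will work
certbot renew --dry-run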

Mysql

There are many flavors available based on the original open source mysql. We use Percona MySQL. Configuration names may vary depending on the version, but equivalents should be available.

Database performance depends on cache configuration based on available memory, a fast disk and your Magento indexing settings. Magento sites have a high percentage of reads to writes to the database – i.e. each product is browsed many times compared to the times you change them. As a result, mysql can perform really well as long as the cache hit rates are high. The following parameters determine the cache configuration.

query_cache_type = 1

; set query cache size to total RAM you can assign to query cache. Magento can use a lot of query cache
query_cache_size = 4G

; max size of an individual query that is cached. In Magento the category menu is often the largest query – ensure that is cached
query_cache_limit = 128M

; generally equal to the size of your database – ensures the database is stored in RAM
innodb_buffer_pool_size = 3G

The values depend on your website – such as number of categories displayed in the main menu or the length of the description of a product. The following queries help in determining the cache performance.

show status like 'Qcache_hits'
show status like 'Qcache_inserts'
show status like 'Qcache_not_cached'
show status like 'Qcache_lowmem_prunes'

Make special note of Qcache_lowmem_prunes, which counts queries that were evicted from the cache due to insufficient memory. A steadily growing value typically means query_cache_limit or query_cache_size should be increased. (A quick hit rate check follows.)
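A quick way to judge cache effectiveness is the hit rate, the ratio of cache hits to total selects. A sketch (Com_select counts selects that missed or bypassed the cache):

show global status like 'Qcache_hits';
show global status like 'Com_select';
-- hit rate = Qcache_hits / (Qcache_hits + Com_select); the closer to 1, the better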

Redis Configuration for cache and session

Click here for sample configuration file

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
pidfile /var/run/redis/redis.pid

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
# luroConnect - 6379 for Magento cache, 6380 for session
port 6379

# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for incoming connections.
#
bind 0.0.0.0

# Specify the path for the unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/redis.log

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility.  Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# luroConnect : set databases to 1 - we use a separate redis process for each type
databases 1

################################ SNAPSHOTTING  #################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving entirely by commenting out all the "save" lines.

# save 900 1
# save 300 10
# save 60 10000
# luroConnect - for Magento, disable save except for the session instance;
# without it, a restart will cause recently added items in a guest cart to disappear
# or a user to get logged out.
# session instance - save every minute if at least 1 key changed :
# save 60 1
# FPC and Magento cache - do not save. The cost of saving on a busy site can be high;
# redis restarts only on service start, which is not often.

# Compress string objects using LZF when dump .rdb databases?
# By default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
# 
# Also the Append Only File will be created inside this directory.
# 
# Note that you must specify a directory here, not a file name.
dir /var/lib/redis/

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

# When a slave loses the connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out-of-date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all kinds of commands
#    except INFO and SLAVEOF.
#
slave-serve-stale-data yes

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10

# The following option sets a timeout for both Bulk transfer I/O timeout and
# master data or ping response timeout. The default value is 60 seconds.
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
# 
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use
# tools but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among the following behaviors:
# 
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
# 
# Note: with all of these policies, Redis will return an error on write
#       operations when there are no suitable keys for eviction.
#
#       At the time of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can also select the sample
# size to check. For instance, by default Redis will check three keys and
# pick the one that was least recently used; you can change the sample size
# using the following configuration directive.
#
# maxmemory-samples 3
# luroConnect - set maxmemory depending on use and requirement
# typically for the Magento cache and session do not set it, but monitor
# for FPC set it to the max available memory, as FPC tends to grow
# if maxmemory is set, use the allkeys-lru policy
maxmemory 2G
maxmemory-policy allkeys-lru

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want even a single record to get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.aof. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# IMPORTANT: Check BGREWRITEAOF to see how to rewrite the append
# log file in background when it gets too big.

appendonly no

# The name of the append only file (default: "appendonly.aof")
# appendfilename appendonly.aof

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush 
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "everysec" that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none", which in practical terms means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
# 
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file, implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
# 
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (or if no rewrite happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
# 
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 1024

################################ VIRTUAL MEMORY ###############################

### WARNING! Virtual Memory is deprecated in Redis 2.4
### The use of Virtual Memory is strongly discouraged.

# Virtual Memory allows Redis to work with datasets bigger than the actual
# amount of RAM needed to hold the whole dataset in memory.
# In order to do so, frequently used keys are kept in memory while the other keys
# are swapped into a swap file, similarly to what operating systems do
# with memory pages.
#
# To enable VM just set 'vm-enabled' to yes, and set the following three
# VM parameters according to your needs.

vm-enabled no
# vm-enabled yes

# This is the path of the Redis swap file. As you can guess, swap files
# can't be shared by different Redis instances, so make sure to use a swap
# file for every redis process you are running. Redis will complain if the
# swap file is already in use.
#
# The best kind of storage for the Redis swap file (that's accessed at random) 
# is a Solid State Disk (SSD).
#
# *** WARNING *** if you are using a shared hosting the default of putting
# the swap file under /tmp is not secure. Create a dir with access granted
# only to Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap

# vm-max-memory configures the VM to use at max the specified amount of
# RAM. Everything that does not fit will be swapped on disk *if* possible, that
# is, if there is still enough contiguous space in the swap file.
#
# With vm-max-memory 0 the system will swap everything it can. Not a good
# default, just specify the max amount of RAM you can in bytes, but it's
# better to leave some margin. For instance specify an amount of RAM
# that's more or less between 60 and 80% of your free RAM.
vm-max-memory 0

# Redis swap files is split into pages. An object can be saved using multiple
# contiguous pages, but pages can't be shared between different objects.
# So if your page is too big, small objects swapped out on disk will waste
# a lot of space. If your page is too small, there is less space in the swap
# file (assuming you configured the same number of total swap file pages).
#
# If you use a lot of small objects, use a page size of 64 or 32 bytes.
# If you use a lot of big objects, use a bigger page size.
# If unsure, use the default :)
vm-page-size 32

# Number of total memory pages in the swap file.
# Given that the page table (a bitmap of free/used pages) is taken in memory,
# every 8 pages on disk will consume 1 byte of RAM.
#
# The total swap size is vm-page-size * vm-pages
#
# With the default of 32-bytes memory pages and 134217728 pages Redis will
# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
#
# It's better to use the smallest acceptable value for your application,
# but the default is large in order to work in most conditions.
vm-pages 134217728

# Max number of VM I/O threads running at the same time.
# These threads are used to read/write data from/to the swap file. Since they
# also encode and decode objects between disk and memory, a bigger
# number of threads can help with big objects even if they can't help with
# I/O itself, as the physical device may not be able to cope with many
# read/write operations at the same time.
#
# The special value of 0 turns off threaded I/O and enables the blocking
# Virtual Memory implementation.
vm-max-threads 4

############################### ADVANCED CONFIG ###############################

# Hashes are encoded in a special way (much more memory efficient) when they
# have at max a given number of elements, and the biggest element does not
# exceed a given threshold. You can configure this limits with the following
# configuration directives.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64

# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run on a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
# 
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with a 2 millisecond delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

Varnish vs Redis FPC

Varnish sits at the edge and caches pages as they are served. The Redis FPC sits inside the Magento application and stores pages as they are served.
With Varnish, the variable components of a page are fetched either via VCL (ESI) or Ajax. The FPC, being inside the application, can render the variable blocks directly in the page.

Magento 2 supports both Varnish and the FPC out-of-the-box. Varnish outperforms the Redis FPC but takes a bit more chutzpah to get working with https, since Varnish does not terminate TLS itself. Magento has a good explanation of the setup at https://devdocs.magento.com/guides/v2.2/config-guide/varnish/config-varnish.html
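For reference, Magento 2 can be switched to Varnish from the CLI. A sketch, assuming Magento 2.2+ and a local Varnish listening on port 6081 (adjust host and port to your setup):

bin/magento config:set system/full_page_cache/caching_application 2   # 2 = Varnish
bin/magento setup:config:set --http-cache-hosts=127.0.0.1:6081
bin/magento varnish:vcl:generate > /etc/varnish/default.vcl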

Setting up the hosting user

Magento suggests using a 2-user system for hosting. What this means is that the php files are not writable by the web server or the php process. Create a user that we will call the hosting user, and add it to the nginx group:

useradd -G nginx <hostinguser>

Set up the www directory with Magento.

Ensure the media and var folders have group write permissions but other folders do not:

cd /home/<hostinguser>/www
find . -type d -exec chmod 750 {} \;
find . -type f -exec chmod 640 {} \;
find ./media -type d -exec chmod 770 {} \;
find ./media -type f -exec chmod 660 {} \;
find ./var -type d -exec chmod 770 {} \;
find ./var -type f -exec chmod 660 {} \;

Conclusion

We have presented detailed configuration files in a DIY guide to Magento hosting. If you need to set up multiple servers, the basic idea is similar, though some configurations will have to change.

Feel free to post your comments and we will improve this guide.

We can analyze your site for free

Schedule a call

Not happy with your website performance and want an expert to look at it?

  • We will analyze your site using public information.
  • We will ask you to give us a 1 day web server log file.
  • We will try to identify what steps, if any, you should take to meet your site's performance goals.

Watch our webinar on performance and scaling in Magento

It's free!

Using an analogy to vehicular traffic, we explain performance and scaling in Magento.
Key takeaways

  • Know how to compare hosting options
  • Importance of good code
  • How to scale
  • Tuning Magento

Enterprise grade security for your Magento website

At a time when data theft and leaks are more real than ever, don’t be the next one to explain the reason for a security breach. With threats growing in number, type and modality of attack every day, multiple layers of security are needed, so if one layer were to be breached, another would yet protect.

Introduction

Today, security of a Magento web server is a major requirement for the success of the business that runs on it.
With attackers using automated scanners or bots, being small or little-known is no reason to feel secure. Attackers will probe for vulnerabilities and break in before deciding if you had anything of worth.

In this article we discuss various aspects of security and how luroConnect gives your website enterprise grade security.

Securing the Magento Application

Magento patch updates

Magento is well maintained software. Magento routinely releases patches, even for the community edition. Keeping patches up to date is important: when Magento releases a patch, it helps you keep your site secure, but it also announces the vulnerabilities of an unpatched system to the world.
A major hurdle in patch application is when the Magento core files have been modified by developers. Developers do this under high time and cost pressure, or for not knowing better.

luroConnect service always patches customers at the earliest — if you have not opted out of the service.

Purchasing plugins from dependable vendors

While Magento is good at giving updates using patches, plugin vendors are not so motivated. Plugins are paid for once, and quite often updates require being on a maintenance plan. So, while selecting a plugin vendor, keeping the vendor's track record in mind is a good idea.

There are many areas where vendors fail to think about security and the following are but a few examples.

Magmi is a popular extension for product upload. However, it is known to have security issues. The developer acknowledges this and has advised not using the web interface. In our research and experience, most Magmi users use its UI, leaving security holes open. luroConnect makes a Magmi installation more secure by providing an htaccess based second login, securing the Magmi installation with permissions that disallow updates to it from the UI, as well as disallowing php execution from Magento's media folders.

Some extensions require the ability to write to folders and files that they reasonably should not. With default access this seems to work, but it opens security holes. Rather than storing plugin configuration in the database (core_config_data), some plugins use files, sometimes even generating php code, a practice we consider unsafe. One example (Aitoc_AdvancedPermissions) requires the ability to change the etc/modules/Aitoc_AltPermissions.xml file.

Some extensions use encrypted code, typically requiring an ionCube decoder. Apart from being legally on the fence, the requirement of installing a decoder increases the risk of encrypted malware, making it difficult to detect the existence of malware even with disk scans.

PhpMyAdmin is another popular tool that makes a site insecure. PhpMyAdmin uses the database username and password. If accessed over an unsecured URL, the credentials of the database are transmitted over the internet in plain text. Even with a secure URL, the database is left open to probing by bots that try username / password combinations.

Magento's open architecture has created a marketplace for developers. Custom development is crucial to Magento's success, and so is custom development from vendors who are aware of security. Some aspects we recommend asking developers about include awareness in the use of form keys, and SQL injection avoidance techniques, specifically validating all input parameters before processing and using Magento's model architecture rather than writing raw SQL.

Securing the Magento admin panel

Using host file entries instead of DNS update for non public sites

DNS entries are the starting point for all BOTs. Windows, Macs and linux allow an entry in the host file that helps the browser resolve a domain name (such as staging.domain.com or phpMyAdmin.domain.com) to an IP address. While not easily doable on a mobile for testing, we encourage the use of host file entries to hide the actual domain name used.

2-password Secure admin access

Use htaccess style basic authentication (available in nginx as well) to protect admin URLs with a double authentication system, with both passwords being different. This thwarts most BOTs as they do not expect basic authentication.
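A minimal nginx sketch of this setup; the admin path (/backend_xyz) and the user name are placeholders:

# create the password file (use -c only the first time)
htpasswd -c /etc/nginx/.htpasswd_admin adminuser

# in the server block, in front of the (obfuscated) admin URL
location /backend_xyz {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd_admin;
    # the usual Magento php handling continues here
}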

Changing the admin url, possibly once a year.

Magento allows obfuscation of the admin url. Selecting a non obvious url for admin and changing it as often as practical helps prevent leakage of the url. In addition, using a different admin url for development and staging compared to production will help keep the URL on a need to know basis.

Managing roles in admin

Magento admin supports role based access. Giving roles on a need to know basis will protect against a threat popularly referred to as "malicious insiders".

Use of captcha in admin and end user login

Magento supports enabling captcha in admin. That helps keep BOTs out and thwarts username / password combination hacks.

Use strong admin passwords and change them often.

This is general advice for any password, as is changing it often.

Hiding Administrators role.

Recently, admin logins have been used to insert malicious code in the header or footer "miscellaneous" html section. That has prompted us to advise hiding the Administrators role - i.e. have no user with the permission to change that section. When needed, the role can be temporarily enhanced to access the setting.

Database containment and Filesystem Security

Limited privileges to write to code directories

System level file, directory (folder) and process permissions are crucial. Separate hosting and web server users, with limited permissions for the web server user, are crucial to protect code directories. The hosting user can update the code but the web user cannot, preventing any attempt to upload code.

Plugin and code updates only allowed via deployment process

The deployment process may be as simple as git pull (we prefer git stash before git pull) or a CI based mechanism such as Jenkins.
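A minimal git based deploy sketch (the path and branch are examples; run it as the hosting user, never as root or the web server user):

cd /home/production/www
git stash                   # set aside any stray local edits
git pull --ff-only origin master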

Readonly git access to live site

If git pull is used to update a site, it is important to ensure that the git user used has readonly access. Even when using CI, ensure the git user has readonly access.

Configuration file protection against unauthorized changes

Ensuring site specific configuration files such as env.php or local.xml are in their own git repository and versioned ensures a protected master copy. Monitoring changes to these files and alerting on changes is a good way to ensure their security.

Only authorized access to code

The webserver should not have permission to write to the code directory. That requires that plugin updates via the download manager be done on a staging or development server and the code moved via git. Automating code deployment will ensure only controlled access to the code.


Security at the edge — before traffic hits your application

Nginx has many features that provide very decent edge security, including protection against basic DOS attacks. However, one has to understand the terse configuration options, and deflecting a DOS attack may require some manual steps during the attack. There are more automated solutions that one can consider for edge security.

Rate limiting to block bots from accessing the site — with whitelist

Nginx supports rate limiting. Rates can be set for a subset of URLs. Whitelisting of IPs is also allowed so that, for example, your own IP need not have the limitation. Rates are configurable, can be specified in hits per second or minute, are best tested per site to find an appropriate number, and should then be monitored. The luroConnect / Insight dashboard shows all IPs that were blocked, helping decide whether to block an IP or add it to the whitelist. (A configuration sketch follows.)
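A sketch of rate limiting with a whitelist in nginx; the IP and rate are examples. Place this in the http block, and add "limit_req zone=perip burst=20 nodelay;" in the locations to protect:

geo $rl_whitelisted {
    default          0;
    203.0.113.10     1;    # example: your office IP
}
map $rl_whitelisted $rl_key {
    0    $binary_remote_addr;
    1    "";                # an empty key is not rate limited
}
limit_req_zone $rl_key zone=perip:10m rate=5r/s;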

Automatic blocking of spambots by User-Agent

Nginx also supports blocking BOTs using regular expressions on the User-Agent string each BOT sends. Note that User Agents are easily faked, so this is not a very reliable way of detecting BOTs.

Blacklist IPs from site access

Nginx allows blacklisting IPs from accessing a site. A more advanced GeoIP restriction is also available. The luroConnect / Insight dashboard gives the top 10 IPs accessing the site on any particular day. This information can be used to decide which IPs need to be blocked. Note that attackers quite often do not use static IPs, so the banned list needs periodic review and release from the blacklist.

Preventing php scripts from running in media folder

Media folders must remain writable by the web process. Restricting execution of php in these folders prevents a malicious upload from executing. (See the nginx sketch below.)
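A sketch for nginx (for Magento 2 the web-accessible path is /pub/media; adjust accordingly):

location ~* ^/media/.*\.(php|php5|phtml)$ {
    deny all;
}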

Encryption of user data

SSL certificate

Being fully secure (serving https over TLS) is becoming a requirement, with Google's ranking and browsers marking plain http sites as insecure. Magento has secure and unsecure URLs, and by default all user data uses a secure URL. With the LetsEncrypt project issuing auto renewable SSL certificates for free, there is no reason for a site not to be fully secure. The luroConnect service includes free LetsEncrypt certificates and auto redirect rules for http to https.

Securing backup

Security is as strong as the weakest link — securing the backup

Backup file transmission encrypted

Securing backup is an essential component of securing data. Using sftp or rsync over ssh one can ensure backup files are transmitted securely.

Backup server shut after backup — ensure the backup is secure

Backup data can be secured at rest. luroConnect goes a step further: the backup server is different for each customer and runs only when needed. The server is started before a backup begins and shut down at the end, ensuring its security.

Disk scans

Scanning disks for known vectors regularly is important.

OS Security

Patch updates

The server is secured by patching it with the latest security updates and ensuring only the required ports are open.

Enable logs and monitor them

Logs for outgoing emails (ideally with subject lines), mysql, the web server, and Magento should be enabled and monitored, either automatically and continuously or manually and periodically. For example, a spam email hack into the site may have left a malicious script that sends out emails. These emails and subject lines will be in the mail log, and a high count of emails can be a reason to suspect a compromised system. (A sketch follows.)
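A sketch of such a periodic check, assuming postfix logging to /var/log/maillog (adjust the path and pattern for your MTA):

# count messages sent today - an unusually high number deserves investigation
grep "$(date '+%b %e')" /var/log/maillog | grep -c 'status=sent'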

Conclusion - Securing Magento requires many steps

With new and sophisticated attacks on the rise, having multiple layers is important: a layer may occasionally break, but harm may not result as another layer will kick in. Nothing less than an entire set of enterprise grade security measures will secure a Magento site so it is always up for business. luroConnect managed hosting gives you all this and more.

For a study of recent Web security breaches, read our article "Analysis of Recent Website Security Breaches"

luroConnect is a managed hosting service for Magento, wherever you are hosted. Multi layered security is one of the aspects.

Securing your Magento site

Executive Summary

Owning a Magento website means you have a resource you own and control on the internet. Keeping it secure in all aspects is a responsibility. You need to ensure

  • your business data is secure.
  • your customer data and privacy are secured.
  • your customers' computers are protected when accessing your website.

Your privacy policy indicates your desire to keep your customers' data secure. You would intentionally not violate the policy but a cyber attack on your infrastructure can cause a violation.

You do not need to be famous to be in an attacker's sights - most attackers today prowl the internet looking for vulnerable sites and automatically get to work. New types of attacks are being designed all the time, hence keeping a site protected is a continuous battle.

As a person ultimately responsible for your website, you need to periodically review your security processes and see they are enhanced for new vulnerabilities.

This article has an overview of the aspects of security you need to worry about. In addition it goes in depth as well to make it actionable.

Aspects of security infrastructure for a Magento Website

  • Protection of the web edge infrastructure. Ensure you keep the bad guys out and the good guys in. Using a Web Application Firewall (WAF) with appropriate configuration will help.
  • Server protection. Ensure access to the server is restricted, preferably without passwords (key based), limit port access, and protect files and folders with only the permissions that are needed. Also ensure the server has the latest security patches installed. Need to know access should be enforced. Code updates should be automated. A staging site should be used for updates from vendors. Plugins should be acquired from known and reputable vendors.
  • Code protection. Ensure your code is patched with the latest Magento (or any platform and plugins). In addition train developers to use safe practices to not leave holes.
  • User data protection in transit. Ensure https access to the entire site so all traffic is secured against in transit hacking as users use your website. Ensure all admin access is restricted, either with dual passwords, two factor authentication, or IP restrictions.
  • Review all access periodically. Change passwords and admin URLs regularly.
  • Run vulnerability scanners. Vulnerability scanners are available for testing many aspects of security. Blackbox scanners such as Trustwave or SecureWorks help check whether external vulnerabilities can be found. Static analyzers such as Fortify scan the code to find code level vulnerabilities.

Protecting Web Edge Infrastructure

The key role of this protection is to keep the bad guys out and allow the traffic you want in.

At the minimum configure nginx or apache (your webserver) to

  • rate limit hits from a single IP address
    However, if you get legitimate hits from behind a firewall these may generate false positives. We recommend logging the IPs that get restricted, reviewing them, and whitelisting the ones that are ok.
  • rate limit admin server access if you cannot IP limit the access to admin
    Many bots try passwords to get access to the admin panel
  • Have a mechanism in place to block IPs (a sketch follows this list).
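A simple blocking mechanism is a deny list kept in its own include file; the IPs below are documentation examples:

# /etc/nginx/blacklist.conf - include this from the http or server block
deny 198.51.100.7;
deny 203.0.113.0/24;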

As with any protection of this nature, detecting bad traffic is key. OWASP is an online community which has come up with a core rule set (CRS). A Magento site needs WAF protection, either at the nginx server level or with an external service such as Webscale Networks, Cloudflare or Sucuri.
The OWASP ModSecurity Core Rule Set (CRS) provides protection against the following categories of attacks.

  • SQL Injection (SQLi)
  • Cross Site Scripting (XSS)
  • Local File Inclusion (LFI)
  • Remote File Inclusion (RFI)
  • Remote Code Execution (RCE)
  • PHP Code Injection
  • Metadata/Error Leakages
  • Project Honey Pot Blacklist
  • GeoIP Country Blocking, etc.

System Admin Note

mod_security is an open source project available for apache and nginx. Installation is not difficult. However, you need to tune the rules to your environment. Some rule tuning may require developer or marketing input.

Protect the admin panel login

  • Change the admin URL periodically. Do not use the same name as in test or development environments
  • IP restrict admin access to only your known IPs
  • Put simple http basic authentication in front of the admin URL - so there are 2 levels of usernames and passwords, which makes it non standard for automated hacks
  • 2 factor authentication to admin panel – may need a plugin
  • Change all admin user passwords frequently
  • Monitor log files for frequent unsuccessful login attempts (see the sketch after this list)
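A sketch for the last item, assuming the nginx combined log format (URI in field 7, status in field 9); the admin path is a placeholder:

# top IPs failing authentication on the admin URL
awk '$7 ~ "^/backend_xyz" && $9 == 401 {print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head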

Server Protection

A hacked or broken-into server - popularly shown in movies - gives the hacker access to your server. Web prowlers continuously probe servers to see what ports are open and what access they can get. Being a very traditional form of hacking, it is highly automated. By the same token, protecting a linux server is not very difficult either: follow some guidelines and a review process.

  • Keep only known ports open and limit which IPs can access these ports
  • Disable any form of password access (like ftp, ssh). Use keys instead.
  • Keep server patched with latest security patches
  • Run only services that you need. For example never run ftp as it gives password based access to the server.
  • Follow very strict rules on file / folder permissions as well as linux groups and users
  • Periodically scan servers for virus signatures
  • Periodically review access keys
  • Automate routine processes and restrict with keys including code update
  • If the db server is a separate server, further restrict access to it. If using an autoscale architecture, further restrict access to the app servers as well.
  • Allow no exceptions - even for a critical fix, do not give a developer direct access, for example.

The following system admin note has technical details for linux system administrators to execute the strategies above

System Admin Note

Depending on the configuration we recommend the following incoming ports to be open:

On the only app server (no load balancer):

  Port (external interface)            Protocol / usage
  80                                   http, main site; open to the world if the site is not fully secure
  443                                  https, main site; open to the world
  22                                   ssh; possibly open only to specific IP addresses

On the db server:

  Interface / port                     Protocol / usage
  External / 22                        ssh; only if you must, to restricted IP addresses
  Internal / 6603                      mysql; restricted to internal IPs that run app servers
  Internal / 22                        ssh access from the other servers

Constellation of app servers, a db server and an nginx load balancer:

  Host / interface / port              Protocol / usage
  Load balancer / external / 80, 443   http and https; open to the world
  Load balancer / external / 22        ssh; possibly open only to specific IP addresses
  Load balancer / internal / 2049      NFS server
  Db server / internal / 6603          mysql, for app server access
  App server / internal / 1110         NFS, for file access across all app servers

Note : If using a load balancer device or service, port 22 access should be given to only the main app server, possibly the nfs host.

System Admin Note : ssh configuration for secure access

ssh needs to be configured to a more secure mode. (Refer linux man page)

  • Change the default port to a non standard port, say 2020. This will thwart obvious breakin attempts. However, an attacker can find the port using a port scan.
  • Disable root ssh access. This is crucial. This means that someone can never login to the server directly as root.
  • Disable password access. Weak passwords are the most common way a breakin is successful. It is common that a server being attacked will have multiple password generators trying to access the server.
  • Enable only key access. With this setting only a private key holder who has a public key stored in the .ssh/authorized_keys file will be allowed access.

/etc/ssh/sshd_config :

...
# set port to non standard 2020
Port 2020
...

# Authentication:
#LoginGraceTime 2m
#PermitRootLogin yes
PermitRootLogin no
...
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
PasswordAuthentication no
...
# GSSAPI options
#GSSAPIAuthentication no
GSSAPIAuthentication yes

Note : First enable key access, exchange keys, and ensure ssh with keys works before disabling password access

Note : When changing the ssh configuration, ensure a separate ssh session to the server is already running. This will prevent you from being locked out of the server while changing the ssh configuration.
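A sketch of the key exchange; user, host and port are examples:

# on your workstation: create a key pair, then install the public key on the server
ssh-keygen -t ed25519
ssh-copy-id -p 2020 loginuser@server.example.com
# confirm key based login works in a new session before disabling passwords
ssh -p 2020 loginuser@server.example.com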

System Admin Note : What users are needed

We recommend the following users

  • Login user
    • This user has sudo access (without password)
    • Ssh key exchanged
    • Admin logs in as this user
  • Site user (production)
    • Website will be hosted under this user (/home/production/www NOT /var/www/html)
    • This user may have ssh access to deploy code
    • This user does not have sudo access
    • If using multiple app servers, this user's id will be shared across all app servers
  • Web server user (apache)
    • Nginx and php processes will be run as this user

System Admin Note : File and Directory Permission settings
  • Only media/pub and var folders should have 770 permissions
  • All other folders should have 750
  • All files under var should have 660
  • All other files should have 640
  • production user should be in apache group

Protect your code

  • Ensure Magento code is patched to the latest. Subscribe to Magento updates and setup process to check for patch releases on a monthly basis.
  • Vet the plugin vendor: many simple looking community plugins can be weak on security and leave the site prone to attacks.
  • Train developers to use safe practices.
  • Keep the code repository (svn / git) access secure and passwordless for developer access.
  • Ensure the external repository provider's web access is secure and uses 2-factor authentication.
  • Ensure backup servers are protected with access to write only given to automatic backup process

User Data Protection in Transit

Customer data in transit is the protection of data sent and received from your website to their browser. Internet connections to websites are not direct - the data hops from server to server in between. A server compromised on the way can tap into your data without anyone knowing it.

  • Using https for the complete site is an obvious requirement. Using higher security options in https is even better.
  • Emails sent should not contain sensitive data. A recent Magento patch fixed such an issue - passwords were earlier sent in emails.

Review all access periodically

Set up a security task force that reviews and reports on security once a month.
The charter of the task force would be to evaluate the following:

  • Who has access to the server, and do they still require this access? Access may be via keys; when an employee or contractor leaves, the key should be removed
  • Who has access to development environment. Have required access been withdrawn appropriately.
  • Who has access to version control systems. Have required access been withdrawn appropriately.
  • Admin access: do all admin users still need access?
  • Were admin passwords reset at agreed frequency?
  • Magento Patch status
  • OS patch and update status
  • External security scan (if arranged) status
  • Action taken report for the last period on any security issues
  • Backup and restore process review to ensure data and code is appropriately backed up

Conclusion

Protecting an eCommerce asset like a website in these times of automated hacks is a major challenge. Periodic reviews are a must as the attacks and defenses are quickly evolving. Being a small site is no excuse - the attackers, typically robots, do not care.


Secure access to a Magento server

Today the biggest threat to your Magento production server are external threats – of being hacked. While you may not be a high value target, hackers run crawlers on the internet to discover servers with weak security and attack. In this article we discuss secure access to a Magento server. An OS level attack if successful can only be fully repelled by re-imaging the server. But preventing a OS level attack is easier than you think – if you follow some simple guidelines. A Magento production server should have restricted access for all. Insecure, password based access should be disabled. If more than one server is used in a constellation, ssh access to the setup should be restricted to only one server. (more…)