Tuesday Apr 30, 2013

Big Fish Selects MySQL Cluster for Real-Time Web Recommendations

The world's largest producer of casual games has selected MySQL Cluster to power its real-time recommendations platform.

High-velocity data ingestion, low-latency reads, on-line scaling and the operational simplicity delivered by MySQL Cluster have enabled Big Fish to increase customer engagement and deliver targeted marketing, providing a more personalized experience to its users.

You can read the full Big Fish Games and MySQL Cluster case study here, and a summary is provided below.

BUSINESS NEED

The global video gaming market is experiencing explosive growth. Competition is intense, so to differentiate their services and engage users, progressive gaming companies such as Big Fish are seeking solutions that more fully personalize the customer experience.

Using Business Intelligence (BI) and predictive analytics, Big Fish can segment customers based on a range of demographic and behavioral indicators. This enables Big Fish to serve highly targeted recommendations and marketing, precisely personalized to a user's individual preferences.

Big Fish's Marketing Management Service (MMS) platform, powered by MySQL Cluster, is used across all of the company's customer management systems, including customer support and the company's "Game Manager", to provide a unique customer experience to each of its millions of monthly users whenever they come into contact with Big Fish.

TECHNOLOGY SELECTION

Big Fish already has an extensive deployment of MySQL databases powering web applications, including the storefront. The team knew MySQL could power the recommendations database, but that doing so would require additional engineering effort to implement database sharding for data ingest and future scaling needs, coupled with a Memcached layer for low-latency reads.

As a result, they began evaluating MySQL Cluster alongside other database technologies. Using MySQL Cluster, the Engineering teams were able to leverage their existing MySQL skills, reducing operational complexity compared to introducing a new database to the Big Fish environment.

At the same time, they knew MySQL Cluster, backed by Oracle, provided the long-term investment protection they needed for the MMS recommendations platform.

Through their evaluation, the Big Fish engineering team identified that MySQL Cluster was best able to meet their technical requirements, based on:

  • Write performance to support high-velocity data ingest;
  • Low-latency access with in-memory tables;
  • On-line scalability, adding nodes to a running cluster;
  • Continuous availability with its shared-nothing architecture;
  • SQL and NoSQL APIs to the cluster, supporting both fast data loading and complex queries.

PROJECT IMPLEMENTATION

As illustrated in the figure below:

  • User data is replicated from the MySQL databases powering the gaming storefront to the Big Fish BI platform;
  • User data is analyzed and segmented within the BI platform;
  • Recommendations are loaded as user records into MySQL Cluster using the NoSQL Cluster_J (Java) Connector;
  • The SQL interface presented by the MySQL Servers then delivers personalized content to gamers in real time, initially serving over 15 million sessions per day (a sketch of this read path follows below).
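
As a rough illustration of that final step, the query below sketches how an application might fetch a user's top recommendations over the SQL interface. The table and column names are hypothetical and are not taken from Big Fish's actual schema; the data itself would have been written via the ClusterJ connector as described above.

  -- Hypothetical schema: recommendations are written via ClusterJ and read over SQL.
  SELECT game_id, score
    FROM user_recommendations
   WHERE user_id = 123456
   ORDER BY score DESC
   LIMIT 5;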


Big Fish has subscribed to MySQL Cluster Carrier Grade Edition (CGE), providing the Engineering team with access to 24x7 Oracle Premier Support and to MySQL Cluster Manager, which reduces operational overhead by providing:

  • Automated configuration and reconfiguration of MySQL Cluster;
  • Automated on-line node addition for on-demand scaling.

The on-line scalability of MySQL Cluster CGE can help Big Fish meet future requirements as the company expands its use of the product to include all new website developments, channels and gaming platforms.

LEARN MORE

Read the full Big Fish Games and MySQL Cluster case study

Read the press release 


Thursday Jul 21, 2011

Scaling Web Databases, Part 2: Adding Nodes, Evolving Schema with Zero Downtime

In my previous post, I discussed scaling web database performance in MySQL Cluster using auto-sharding and active/active geographic replication - enabling users to scale both within and across data centers.  

I also mentioned that while scaling the write performance of any web service is critical, it is only one of several dimensions of scalability, which include:

- The need to scale operational agility to keep pace with demand. This means being able to add capacity and performance to the database, and to evolve the schema – all without downtime;

- The need to scale queries by having flexibility in the APIs used to access the database – including SQL and NoSQL interfaces;

- The need to scale the database while maintaining continuous availability.

All of these subjects are discussed in more detail in our new Scaling Web Databases guide.

In this posting, we look at scaling operational agility. 

As a web service gains in popularity, it is important to be able to evolve the underlying infrastructure seamlessly, without incurring downtime and without having to add significant additional DBA or developer resources.

Users may need to increase the capacity and performance of the database, enhance their application (and therefore their database schema) to deliver new capabilities, and upgrade their underlying platforms.

MySQL Cluster can perform all of these operations and more on-line – without interrupting service to the application or clients.  

On-Line, On-Demand Scaling

MySQL Cluster allows users to scale both database performance and capacity by adding Application and Data Nodes on-line, so they can start with small clusters and then scale them on demand, without downtime, as a service grows. Scaling could be the result of more users, new application functionality or more applications needing to share the database.

In the following example, the cluster on the left is configured with two application and data nodes and a single management server.  As the service grows, the users are able to scale the database and add management redundancy – all of which can be performed as an online operation.  An added advantage of scaling the Application Nodes is that they provide elasticity in scaling, so they can be scaled back down if demand on the database decreases.

When new data nodes and node groups are added, the existing nodes in the cluster initiate a rolling restart to reconfigure for the new resources.  This rolling restart ensures that the cluster remains operational while the new nodes are added.  Tables are then repartitioned across the new node groups, and the now-redundant row copies left on the original nodes are deleted with the OPTIMIZE TABLE command.  All of these operations are transactional, ensuring that a node failure during the add-node process will not corrupt the database.
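
To make the repartitioning step concrete, the statements below show what would be run against one hypothetical table, using the ONLINE syntax of the MySQL Cluster releases current at the time of writing: the REORGANIZE PARTITION statement redistributes rows across the enlarged set of node groups, and OPTIMIZE TABLE then reclaims the space left behind on the original data nodes.

  -- Run for each NDB table once the new node group is online (table name is hypothetical).
  ALTER ONLINE TABLE user_recommendations REORGANIZE PARTITION;
  -- Reclaim the space previously occupied by rows that now live on the new node group.
  OPTIMIZE TABLE user_recommendations;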

The operations can be performed manually from the command line or automated with MySQL Cluster Manager, part of the commercial MySQL Cluster Carrier Grade Edition.

On-Line Cluster Maintenance

With its shared-nothing architecture, it is possible to avoid database outages by using rolling restarts to not only add but also upgrade nodes within the cluster.  Using this approach, users can:

- Upgrade or patch the underlying hardware and operating system;

- Upgrade or patch MySQL Cluster, with full online upgrades between releases.

MySQL Cluster supports on-line, non-blocking backups, ensuring service interruptions are also avoided during this critical database maintenance task.  Users can exercise fine-grained control when restoring a MySQL Cluster from backup with ndb_restore: specific tables or databases can be restored on their own, or excluded from the restore, using the --include-tables, --include-databases, --exclude-tables, and --exclude-databases options.

On-Line Schema Evolution

As services evolve, developers often want to add new functionality, which in many instances may demand updating the database schema.  

This operation can be very disruptive for many databases, with ALTER TABLE commands taking the database offline for the duration of the operation.  When users have large tables with many millions of rows, downtime can stretch into hours or even days.

MySQL Cluster supports on-line schema changes, enabling users to add new columns and tables and add and remove indexes – all while continuing to serve read and write requests, and without affecting response times.
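
As a brief sketch (the table and column names here are hypothetical), the statements below use the ONLINE syntax available in the MySQL Cluster releases of the time to add a column and an index while the table continues to serve traffic; note that a column added online must be NULLable, use the dynamic column format and have no default value.

  -- Add a column online: it must be NULLable, use COLUMN_FORMAT DYNAMIC and have no DEFAULT.
  ALTER ONLINE TABLE user_recommendations
    ADD COLUMN last_played DATE NULL COLUMN_FORMAT DYNAMIC;
  -- Indexes can also be added online.
  CREATE ONLINE INDEX idx_last_played ON user_recommendations (last_played);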

Unlike other on-line schema update solutions, MySQL Cluster does not need to create temporary tables, so users do not have to provision double the usual memory or disk space in order to complete the operation.

Summary

So in addition to scaling write performance, MySQL Cluster can also scale operational agility.  I'll post more on scaling of data access methods and availability levels over the next few weeks.

You can read more about all of these capabilities in the new Scaling Web Databases guide.  

And of course, you can try MySQL Cluster out for yourself - it's available under the GPL:

The GA release is MySQL Cluster 7.1, which can be downloaded here, but I'd recommend taking a look at the latest Development Milestone Release of MySQL Cluster 7.2, which adds some great new capabilities (localized JOIN operations, simpler provisioning, etc.); it can be downloaded from here (select the Development Releases tab).

As ever, let me know if there are other dimensions of scalability that I should be discussing.
