Tuesday Apr 30, 2013

NetMotion Wireless Migrates Product from Microsoft SQL Server to MySQL: Reduces Costs and Increases Flexibility


NetMotion Wireless develops software to manage and secure wireless data deployments for organizations with mobile field workers. Founded in 2001, NetMotion Wireless is one of the fastest growing wireless and technology companies and the recipient of over 25 awards for outstanding technology.  NetMotion Wireless has over 2,500 customers, including Advocate Healthcare, Comcast, and Unilever.


NetMotion Wireless had used Microsoft SQL Server with Mobility Analytics, their mobile VPN product’s analytics module, but found it was too costly and lacked the platform and language flexibility to be a good [embedded] database. As a result, when the NetMotion Wireless product team was developing Locality, a cellular network management product, they decided to find an alternative database to use with it and with Mobility Analytics. The team compared MySQL with Microsoft SQL Server on a number of key criteria and, after finding nothing in the “con column”, decided to use MySQL.


According to Jonathan Wiggs, NetMotion Wireless database architect, MySQL has been able to meet their products’ needs for:

  • Real-time Data Collection and Analysis – with the ability to scale to hundreds of terabytes.
  • Strong .NET and Java Support – “MySQL works every bit as well with ADO.NET as SQL Server, and its Java integration is equally good,” said Wiggs.
  • Database Profiling and Tracing – MySQL Enterprise Monitor provided low-level visibility on par with other enterprise databases, delivering the data analytics and caching levels their products require.
  • Easy, Transparent Administration – “MySQL’s administration is automated, easy to configure, and highly configurable: we can get it to leap through any flaming hoops we need,” said Wiggs.
  • “The Perfect Trifecta” – “There has been no tradeoff in using MySQL. MySQL has provided the perfect trifecta of feature parity, ease of integration, and lower cost,” said Wiggs.

Read more>>

Big Fish Selects MySQL Cluster for Real-Time Web Recommendations

The world's largest producer of casual games has selected MySQL Cluster to power its real-time recommendations platform.

High velocity data ingestion, low latency reads, on-line scaling and the operational simplicity delivered by MySQL Cluster have enabled Big Fish to increase customer engagement and deliver targeted marketing, providing a more personalized experience to its users.

You can read the full Big Fish Games and MySQL Cluster case study here, with a summary below.

BUSINESS NEED

The global video gaming market is experiencing explosive growth. Competition is intense, and so to differentiate services and engage users, progressive gaming companies such as Big Fish are seeking solutions to more fully personalize the customer experience.

Using Business Intelligence (BI) and predictive analytics Big Fish can segment customers based on a range of demographic and behavioural indicators. This enables Big Fish to serve highly targeted recommendations and marketing, precisely personalized to a user's individual preferences.

Big Fish's Marketing Management Service platform, powered by MySQL Cluster, is used across all of the company's customer management systems, including customer support and the company's "Game Manager", to provide a unique customer experience to each of its millions of monthly users whenever they come in contact with Big Fish.

TECHNOLOGY SELECTION

Big Fish already had an extensive deployment of MySQL databases powering web applications, including the storefront. They knew MySQL could power the recommendations database, but doing so would require additional engineering effort to implement database sharding to support data ingest and future scaling needs, coupled with a Memcached layer for low-latency reads.

As a result, they began evaluations of MySQL Cluster, in addition to other database technologies. Using MySQL Cluster, the Engineering teams were able to leverage their existing MySQL skills, enabling them to reduce operational complexity when compared to introducing a new database to the Big Fish environment.

At the same time, they knew MySQL Cluster, backed by Oracle, provided the long-term investment protection they needed for the MMS recommendations platform.

Through their evaluation, the Big Fish engineering team identified that MySQL Cluster was best able to meet their technical requirements, based on:

  • Write performance to support high velocity data ingest;
  • Low latency access with in-memory tables;
  • On-line scalability, adding nodes to a running cluster;
  • Continuous availability with its shared-nothing architecture;
  • SQL and NoSQL APIs to the cluster, supporting both fast data loading and complex queries.

PROJECT IMPLEMENTATION

As illustrated in the figure below:

  • User data is replicated from the MySQL databases powering the gaming storefront to the Big Fish BI platform;
  • User data is analyzed and segmented within the BI platform;
  • Recommendations are loaded as user records into MySQL Cluster using the NoSQL Cluster_J (Java) Connector;
  • The SQL interface presented by the MySQL Servers then delivers personalized content to gamers in real-time, initially serving over 15m sessions per day.


Big Fish has subscribed to MySQL Cluster CGE, providing the Engineering team with access to 24x7 Oracle Premier Support and MySQL Cluster Manager, which reduces operational overhead by providing:

  • Automated configuration and reconfiguration of MySQL Cluster;
  • Automated on-line node addition for on-demand scaling.

The online scalability of MySQL Cluster CGE can help Big Fish to meet future requirements, as it expands its use of MySQL Cluster CGE to include all new website developments, channels and gaming platforms.

LEARN MORE

Read the full Big Fish Games and MySQL Cluster case study

Read the press release 


Friday Apr 26, 2013

MySQL Web Reference Architectures: On Demand

Last week we ran a live webinar presenting the new MySQL Web Reference Architectures, a set of documented and repeatable best practices for building highly available, scalable and secure database infrastructure to power new generations of web and mobile services.

The webinar replay is now available on-demand so you can listen in from the comfort of your own desk…or commute.

As a taster - we discuss sizing and design patterns - you can see a sample below:




We received some great questions during the Q&A session which I felt would be useful to you, so I've included them below:

Q. You are talking a lot about MySQL 5.6. We are on MySQL 5.1 - what are the reasons for us to upgrade?

A. There is a great summary in the MySQL 5.6 developer / DBA guide. You will see the advantages of 5.6 for web and cloud applications in performance, scaling, agility, security, instrumentation and reliability.

Q. When using replication, the master sends only the log file for the slave?

A. The binary log contains all of the database writes on the master. The master then sends the data from this log file to each of the slaves so that they can apply the same writes. It doesn't send the log file itself, but sends the changes as they're written to the file (or, if a slave disconnects and then reconnects, the master will send over all the changes that it missed). You can find out more in the introduction to MySQL replication paper.

Q. Are there any open-source utilities to provide Virtual IP failover that can be integrated with the mysqlfailover utility? Or should we use Corosync and Pacemaker with mysqlrpladmin?

A. Certainly you could use Corosync and Pacemaker - in which case you're right that mysqlrpladmin might be the best option, as then you only have one brain deciding when to fail everything over. Alternatively, you can bind your own scripts into mysqlfailover and use those to trigger a VIP failover.

Q. What should we do to address any potential replication lag?

A. Fast cores on the slave would help. If you're able to split your tables between different schemas (databases in MySQL terms), then you can exploit multi-threaded slaves (parallel slave apply) in MySQL 5.6. Also take a look at optimized row-based replication in MySQL 5.6.

Q. Does MySQL 5.6 offer any replication related improvements over 5.5?

A. You bet. Performance, reliability, flexibility. See the introduction paper referenced in the answer above.

Q. Any resources you know of on implementing the Medium-size MySQL web reference architecture with Ruby on Rails apps in the application layer?

A. General best practices outlined in the MySQL web reference architectures guide will apply. There are more specific MySQL / Ruby on Rails best practices here. You can also use community-developed MySQL Ruby connectors.

Q. How can we integrate MySQL with Hadoop?

A. You have several options which can be used independently or together. Apache Sqoop provides batch transfers between MySQL and Hadoop, and is also fully bi-directional, so you can replicate the results of Hadoop MapReduce jobs back to MySQL tables. There is also the new MySQL Applier for Hadoop, which enables the streaming of events in real time from MySQL to Hadoop.

You can learn more about all of the options from the MySQL and Hadoop guide to big data.

Q. MySQL Cluster looks interesting, but when we looked at it in the past, it didn't support Foreign Keys, so not sure how you can suggest it as an option for eCommerce apps?

A. There are many eCommerce providers already running MySQL Cluster - Shopatron is a great example. The latest development milestone release of MySQL Cluster 7.3 adds native support for Foreign Keys as well - so it's worth taking another look!

Q. Is it possible to customise auto-sharding when using MySQL Cluster?

A. Yes, you can choose which column within the primary key is to be used as the 'sharding key' - that way you can, for example, make sure that all data for a specific user will be in the same data node, which helps ensure linear scalability. Details are in the MySQL Cluster Performance Optimization guide.

Q. Any advice on UserID generation for sharding?

A. One design pattern is to use an auto-increment column and then hash on that. Another option is to use a UUID - you can then shard by ranges or by hashing. A minimal sketch of the first pattern follows below.
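
As a sketch of the hash-on-auto-increment approach (the shard count and modulo scheme here are illustrative assumptions, not something covered in the webinar):

function shardForUser(userId, shardCount) {
  // hash an auto-increment user ID onto one of shardCount shards
  return userId % shardCount;
}

var shard = shardForUser(1000042, 8); // user 1000042 lands on shard 2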

Q. Can we choose one: clustering or sharding? Which one is better for HA?

A. It depends on a number of factors, such as what level of availability you need and how your application is architected. A good resource to help you weigh up the pros and cons of the different approaches is the MySQL guide to high availability.

Q. My company is using 5.5 with one master and slave for our web application. We are currently looking into a new system infrastructure using VM technology. What would you recommend?

A. There are many options available to you on multiple hypervisors. We have created a VM Template for MySQL running on Oracle VM and Oracle Linux. This is a pre-installed, pre-configured and certified software stack which you download as a single file and then provision to an Oracle VM Server farm. You can learn more about it from the Oracle VM Template for MySQL Enterprise Edition guide.

Q. I'm interested in the MySQL monitoring tools and in the audit plug-in - how do I get these?

A. These are available as part of MySQL Enterprise Edition, which also packages support, backup, security plug-ins and more in an annual subscription.

Q. How much would it cost to run MySQL Enterprise Edition on that "perfect server"?

A. Pricing is available in the on-line store. Note that the list price is the same regardless of the number of cores unless you exceed 4 CPU *sockets*.

Q. I am sorry for this very novice question, but could I have some help regarding references that would allow me to digest the "base" of these technologies presented here?

A. Assuming it is case studies you are after, we have a lot of them published at the MySQL customers page. In the "Industry" drop-down box, take a look at Web eCommerce, games, SaaS/Hosting and social networks.


Thursday Apr 25, 2013

Driving MySQL Innovation

Oracle's VP of MySQL Engineering, Tomas Ulin, delivered a keynote entitled "Driving MySQL Innovation for Next Generation Applications" on Tuesday at the Percona Live conference.

If you haven't seen it yet, we highly encourage you to watch it.

Tomas covers:

  • Oracle's Investment in MySQL
  • MySQL 5.6
  • Trends and Product Directions

He makes it very clear that Oracle:

  • Invests in MySQL like Never Before
  • Drives MySQL Innovation
  • Makes MySQL Better for Next Generation Web, Cloud and Big Data Applications


Enjoy the keynote!

Monday Apr 22, 2013

Announcing the MySQL Applier for Apache Hadoop

Enabling Real-Time MySQL to HDFS Integration

Batch processing delivered by Map/Reduce remains central to Apache Hadoop, but as the pressure to gain competitive advantage from “speed of thought” analytics grows, Hadoop itself is undergoing significant evolution. Technologies allowing real-time queries, such as Apache Drill, Cloudera Impala and the Stinger Initiative, are emerging, supported by new generations of resource management with Apache YARN.

To support this growing emphasis on real-time operations, we are releasing a new MySQL Applier for Hadoop to enable the replication of events from MySQL to Hadoop / Hive / HDFS (Hadoop Distributed File System) as they happen.  The Applier complements existing batch-based Apache Sqoop connectivity.

Replication via the MySQL Applier for Hadoop is implemented by connecting to the MySQL master, reading binary log events as soon as they are committed, and writing them into a file in HDFS.


The Applier for Hadoop uses an API provided by libhdfs, a C library to manipulate files in HDFS. The library comes precompiled with Hadoop distributions.

It connects to the MySQL master to read the binary log and then:

  • Fetches the row insert events occurring on the master
  • Decodes these events, extracts the data inserted into each field of the row, and uses content handlers to get it into the format required
  • Appends it to a text file in HDFS.

Databases are mapped as separate directories, with their tables mapped as sub-directories within a Hive data warehouse directory. Data inserted into each table is written into text files (named datafile1.txt) in Hive / HDFS. Data can be written in comma-separated format, or in any other format configurable via command-line arguments.
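
For example, a hypothetical database sales containing a table orders would produce a layout like this under the Hive warehouse directory (the base path and all names are illustrative only):

/user/hive/warehouse/sales/orders/datafile1.txt

Each row insert event is appended to that file as one delimited line, for example: 101,2013-04-22,99.99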


You can learn more about the design of the MySQL Applier for Hadoop from this blog.

The installation, configuration and implementation are discussed in detail in this Applier for Hadoop blog. Integration with Hive is also documented.

You can also see it in action from this MySQL Hadoop Applier Video Tutorial 

With the growth in big data projects and Hadoop adoption, it would be great to get your feedback on how we can further develop the Applier to meet your real-time integration needs. Please use the comments section to let the MySQL team know your priorities.

See the benefits of MySQL Cluster through Oracle Training

If the following items describe what you need in a high-availability solution, then MySQL Cluster is for you:

  • High scale, reads and writes
  • 99.999% availability
  • Real-time
  • SQL and NoSQL
  • Low TCO

And what better way to get started on MySQL Cluster than taking the authentic MySQL Cluster training course.

In this 3-day course, you learn important cluster concepts and get hands-on experience installing, configuring and managing a cluster. Some events already on the schedule for this course include:

  • London, England: 12 June 2013 (English)
  • Hamburg, Germany: 1 July 2013 (German)
  • Munich, Germany: 10 June 2013 (German)
  • Budapest, Hungary: 19 June 2013 (Hungarian)
  • Warsaw, Poland: 10 June 2013 (Polish)
  • Barcelona, Spain: 12 August 2013 (Spanish)
  • Madrid, Spain: 10 June 2013 (Spanish)
  • Istanbul, Turkey: 27 May 2013 (Turkish)
  • Irvine, CA, United States: 24 July 2013 (English)
  • Edison, NJ, United States: 29 May 2013 (English)
  • Jakarta, Indonesia: 5 August 2013 (English)

For more information on this course or other courses on the authentic MySQL curriculum, go to http://oracle.com/education/mysql.

Thursday Apr 18, 2013

Adzuna Relies on MySQL to Support Explosive Growth

Adzuna is a fast-growing search engine for classified ads, specialized in jobs, properties and cars. Headquartered in the UK and launched in 2011, Adzuna searches thousands of sites and tens of millions of ads to make it very easy to find the perfect job, home or car locally. It furthermore provides a wealth of statistics such as salary trend graphs and comparisons, geographic job maps, house prices...and more. Additionally, Adzuna is integrated with Facebook and LinkedIn and shows open vacancies one is connected to through his/her own network. The search engine powers a number of government applications and is integrated into the UK Prime Minister's economic dashboard.

Challenges

  • When Adzuna's founders were selecting the database powering the search engine's architecture, they were planning for scalability and reliability. Not only did they expect fast growth, but also unpredictable growth: the number of users could jump from ten thousand to one million in a single day, and any downtime or scalability failure was simply not an option, as it could turn new users away from the site forever and undermine its reputation from the start.
  • As a Web startup, low Total Cost of Ownership was essential, and the team also desired to implement a database solution they would be able to customize and tailor to their specific needs.


Solution

  • The Adzuna team selected MySQL as the database powering their search engine. They had very positive previous experiences of the world’s most popular open source database, and particularly appreciated its performance and scalability, reliability and ease of use, including the quality of its technical documentation. They were confident it would scale according to their requirements, and were ready to bet their business on the database.
  • Key MySQL strong points included its overall suitability for highly demanding web-based applications, its Geographic Information System (GIS) support and integration with the Perl programming language used by the company.
  • Adzuna's search infrastructure uses MySQL to normalize, on a minute-by-minute basis, the data from tens of millions of ads from thousands of websites across four continents, resulting in a database size exceeding 100 GB. The Adzuna engine constantly crawls the Web and receives XML feeds from its partners, and a key to its success is its ability to turn a massive amount of unstructured data into structured data that can be stored, understood and searched by its users. The team decided to use the Apache Solr open source search platform as the front end, leveraging its NoSQL features. Data is subsequently transferred into MySQL. The following diagram describes the Adzuna architecture:
  • In order to help ensure scalability and reliability while focusing its resources on developing its business, the Adzuna team decided to rely on MySQL in the cloud, working with cloud services provider Media Temple.
  • The startup has experienced explosive growth since its launch, currently serving 2 million unique visitors per month, and it recently expanded into Germany, Australia, Canada, South Africa and Brazil. MySQL fully meets the expectations of the Adzuna team, who are confident the database will scale to support their ambition to expand into 30 more countries and become the leading search engine for jobs, homes and cars worldwide.
  • As Taleo, Oracle's talent management cloud service, is increasingly used by Adzuna's partners, the company is currently looking at optimizing data transfer from Taleo to MySQL.

“My advice to young startups is to use MySQL, especially if you have high growth expectations. You’ll need to plan for unpredictability and to have a very robust backbone to support it, and that’s exactly what MySQL provides.” Andrew Hunter, Co-Founder, Adzuna

Wednesday Apr 17, 2013

MySQL Community Reception - Celebrate with Oracle's MySQL Team!

Join Oracle’s MySQL team on April 22 to celebrate the general availability of MySQL 5.6. Get together in a relaxed atmosphere—make a toast, enjoy product demos and fun activities, and celebrate the MySQL community effort and contributions that made the latest release possible. Register here today!

Date: Monday, April 22, 2013

6:30 PM – 8:30 PM

Location: TechMart Santa Clara
5201 Great America Parkway
Santa Clara, CA, USA
(1-minute walk from Hyatt Regency Santa Clara)

Don't miss the opportunity to enjoy an evening of casual but informative conversation, with complimentary food and refreshments from Oracle! Walk-ins are also welcome.

MySQL Cluster 7.3 DMR2: Increasing Developer Flexibility and Simplicity

Highlights: Foreign Keys, NoSQL JavaScript Connector, MySQL 5.6 and Auto-Tuned Clustering

The MySQL team at Oracle are excited to announce the immediate availability of the second MySQL Cluster 7.3 Development Milestone Release (DMR).

Some might call MySQL Cluster 7.3 “the foreign keys release” – and sure enough it is a major engineering achievement to build a distributed database that enforces referential integrity across a shared-nothing cluster, while maintaining ACID compliance and cross-shard JOINs. But MySQL Cluster 7.3 is a lot more as well. 

The design focus has been on enabling developer agility – making it simpler and faster than ever to enhance new services with a highly scalable, fault tolerant, real-time database – with minimum development or operational effort. 

The key enhancements delivered by MySQL Cluster 7.3 are summarized below.

Foreign Keys: Strengthens data modeling and simplifies application logic by automatically enforcing referential integrity between different tables distributed on different shards, on different nodes... even in different data centers.

NoSQL JavaScript Connector for Node.js: Enables a single programming language and a single tool-chain by extending JavaScript from the client to the server, all the way through to the database, bypassing the SQL layer to deliver lower latency and reduced development cycles.  

MySQL 5.6 Support: Developers can combine the InnoDB and MySQL Cluster NDB storage engines within a single database, using the very latest MySQL 5.6 release.

Connection Thread Scalability: Increases cluster performance and capacity by improving the throughput of each connection to the data nodes, thus reducing the number of connections that need to be provisioned, and enabling greater scale-out headroom.  Current testing is showing up to 3x higher throughput per connection, enabling more client threads to use each connection. 

Auto-Installer: Get it all up and running in minutes! Graphically configure and provision a production-grade cluster, automatically tuned for your workload and environment, without ever resorting to “RTFM”.  

MySQL Cluster: Automated Tuning and Configuration  

Read the MySQL Cluster 7.3 DMR2 DevZone article for more detail on all of the enhancements discussed above.

You can download the source and binaries for MySQL Cluster 7.3 DMR2 today (select the Development Releases tab).  


MySQL Cluster Tutorial: NoSQL JavaScript Connector for Node.js

This tutorial has been authored by Craig Russell and JD Duncan

The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:

- allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database

- native "NoSQL" access to the storage layer without going first through SQL transformations and parsing.

Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications.

While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

Changes since the previous blog:

- InnoDB is now supported via the mysql adapter (accesses data via the mysqld server)

- Auto-increment columns are now supported

- Default values in columns are now supported

- Multiple databases are now supported

- Column converters for JavaScript types that need special (user-written) mapping to database types are now supported

- Queries that specify all columns of a primary or unique key index are now supported

- When acquiring a connection or session, specific table or class metadata can be provided in order to pre-load database metadata and signal an error if not all metadata can be loaded

- Users can now get metadata for tables by using the session.getMetadata function

- The user interface to map JavaScript domain objects to database tables has been significantly simplified

Implementation

The architecture and user interface of this connector differ from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js.

To make it as easy as possible, we decided to use a domain object model to store the data. This allows for users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database.

To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL (and again) server is between the user and the data.

The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The current release supports ndb via both native access and mysqld; and supports InnoDB via mysqld.

The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability.

The following sections take you through how to connect to MySQL, query the data and how to get started.


Connecting to the database

A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

var nosql = require("mysql-js");

var dbProperties = {

    "implementation" : "ndb",

    "database" : "test"

};

nosql.openSession(dbProperties, null, onSession);

The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

Default database

Every session and connection to the database has a default database associated with it. When mapping domain objects to the database, or using a table name to identify a table, users can specify the database by using the explicit, qualified form for the table name: 'databaseName.tableName'. If users omit the databaseName, the default database associated with the session is used.

This feature supports multi-tenancy by allowing the database name to be specified during connection, while allowing the table to dynamically refer to the specific database in use.
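
A short sketch of both forms, assuming the session was opened with "database" : "test" as above, and a hypothetical tenant database named tenant42:

// unqualified: resolves against the session's default database ("test")
session.find('employee', 0, onData);

// qualified: names the database explicitly, overriding the default
session.find('tenant42.employee', 0, onData);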

Pre-load metadata for tables or domain objects

If your application requires specific tables or domain objects to be available, you can specify the tables and domain objects in your connect or openSession function. For example, if you need the table 't_basic' and the domain object 'Employee' to run the application, you can specify these during the connect or openSession functions.

nosql.openSession(dbProperties, ['t_basic', Employee], onSession);

If the t_basic table or the mapped Employee domain object cannot be used by the session, an error will be signaled and the onSession callback will report the specific error.

Getting Table metadata

If getting metadata associated with your application's tables is important, you can get the information by using the session function getMetadata. This function will return information about the specified table in the same format as used in mapTable. You can get the names and types of the table's columns and use the information to dynamically access tables and columns.
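
A minimal sketch (the callback style follows the rest of the API; since the exact shape of the metadata object is the one used by mapTable, it is simply printed here):

session.getMetadata('employee', function(err, metadata) {
  if (err) {
    console.log(err);
    return;
  }
  // inspect the table metadata, e.g. the names and types of its columns
  console.log(JSON.stringify(metadata));
});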

Reading data

The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

create table employee (

  id int not null primary key,

  name varchar(32),

  salary float

) ENGINE=ndbcluster;

Since the primary key is a number, you can provide the key as a number to the find function.

var onSession = function(err, session) {

  if (err) {

    console.log(err);

    ... error handling

  }

  session.find('employee', 0, onData);

};

var onData = function(err, data) {

  if (err) {

    console.log(err);

    ... error handling

  }

  console.log('Found: ', JSON.stringify(data));

  ... use data in application

};

If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses, by specifying an annotation, and pass your domain model to the find function.

var Employee = function(id, name, salary) {

  this.id = id;

  this.name = name;

  this.salary = salary;

  this.giveRaise = function(percent) {

    this.salary *= percent;

  }

};

annotations.mapClass(Employee, {'table' : 'employee'});

var onSession = function(err, session) {

  if (err) {

    console.log(err);

    ... error handling

  }

  session.find(Employee, 0, onData);

};

Special Domain Object Property Types

If your domain object uses types that do not map directly to database types, you can use column converters to transform domain types to database types.

For example, if you have a domain type such as a MaritalStatus type that contains only values of type MARITAL_STATUS, you can define a conversion that translates domain object values into database values.

var MARITAL_STATUS = {
  NEVER_MARRIED: {value: 0, code: 'N', name: 'NEVER_MARRIED'},
  MARRIED:       {value: 1, code: 'M', name: 'MARRIED'},
  DIVORCED:      {value: 2, code: 'D', name: 'DIVORCED'},
  lookup: function(value) {
    switch (value) {
      case 0: return this.NEVER_MARRIED;
      case 1: return this.MARRIED;
      case 2: return this.DIVORCED;
      default: return null;
    }
  }
};


// column converter for status
var statusConverter = {
  toDB: function toDB(status) {
    return status.value;
  },
  fromDB: function fromDB(value) {
    return MARITAL_STATUS.lookup(value);
  }
};
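
To take effect, the converter has to be attached to the mapped field. One plausible wiring extends the mapClass call shown earlier; note that the 'fields' and 'converter' mapping options here are assumptions for illustration, not confirmed by this post:

annotations.mapClass(Employee, {
  'table' : 'employee',
  'fields' : [
    // assumed per-field mapping: statusConverter turns MARITAL_STATUS objects
    // into integer codes on write, and back into objects on read
    {'fieldName' : 'status', 'columnName' : 'status', 'converter' : statusConverter}
  ]
});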

Updating data

You can update the emp instance in memory, but to make the changes persistent, you need to write it back to the database, using the update function.

var onData = function(err, emp) {

  if (err) {

    console.log(err);

    ... error handling

  }

  console.log('Found: ', JSON.stringify(emp));

  emp.giveRaise(0.12); // gee, thanks!

  session.update(emp); // oops, session is out of scope here

};

Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables.
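
For comparison, the closure approach looks like this (a sketch reusing the find and update calls from above):

var onSession = function(err, session) {
  if (err) {
    console.log(err);
    return;
  }
  session.find(Employee, 0, function(err, emp) {
    // this inner callback closes over 'session', keeping it in scope
    if (err) {
      console.log(err);
      return;
    }
    emp.giveRaise(0.12);
    session.update(emp, onUpdate); // session is visible via the closure
  });
};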

The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function. But the connector will keep track of extra variables for you and return them to the callback. So in the above example, change the onSession function to pass the session variable along, and you can refer to it in the onData function:

var onSession = function(err, session) {

  if (err) {

    console.log(err);

    ... error handling

  }

  session.find(Employee, 0, onData, session);

};

var onData = function(err, emp, session) {

  if (err) {

    console.log(err);

    ... error handling

  }

  console.log('Found: ', JSON.stringify(emp));

  emp.giveRaise(0.12); // gee, thanks!

  session.update(emp, onUpdate); // session is now in scope

};

var onUpdate = function(err, emp) {

  if (err) {

    console.log(err);

    ... error handling

  }
};

Inserting data

Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

var onSession = function(err, session) {

  var data = new Employee(999, 'Mat Keep', 20000000);

  session.persist(data, onInsert);

};

Autoincrement Columns

Autoincrement columns allow, but do not require, users to specify their values. If users want to specify values for autoincrement columns, for example to reset the autoincrement value for the table, the insert function allows specification of values for these columns.

But if users want to exploit the autoincrement functionality, they must avoid setting a value for autoincrement columns. When mysql-js detects that the user has not specified a value, the next value in sequence is used. In the callback for the insert operation, mysql-js has filled in the missing values.
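
A short sketch of both cases, assuming (hypothetically) that the employee table's id column were declared AUTO_INCREMENT:

// leave id unset: mysql-js fills in the next value in the sequence, and the
// assigned id is visible on the object in the insert callback
var emp = new Employee();
emp.name = 'Sam';
emp.salary = 50000;
session.persist(emp, onInsert);

// or specify id explicitly, for example to reset the sequence point
session.persist(new Employee(1000, 'Alex', 60000), onInsert);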

Default Values

Columns that specify a default value likewise allow, but do not require, users to supply a value. If users specify 'undefined' for these columns, mysql-js will use the column's default value on insert.

Deleting data

To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove. Only the key field is relevant.

var onSession = function(err, session) {

  var key = new Employee(999);

  session.remove(key, onDelete);

};

More extensive queries

Queries are defined using a builder pattern and then executed with parameters that can specialize the query and control the operation of the query.

To define a query, use the createQuery function of Session. Provide a constructor function of a mapped domain object or a table name. The resulting QueryDomainType is returned in the callback. The QueryDomainType is a specialized object that has a property for each property in the user's domain object, or a property for each column in the table. These properties are of type QueryField, and they implement functions that allow you to compare column values of database rows to parameters supplied when you execute the query.

session.createQuery(Employee, function(err, qdt) {

// build and execute the query using qdt

});

To build the query, use the query domain type to filter the results. If nothing else is specified, executing the query will return all rows in the table mapped by Employee as an array of instances of Employee.

To filter the results, similar to using a WHERE clause in SQL, specify a query predicate using the where function of the query domain type. To build a query predicate, you can compare fields in the query domain type to values provided as parameters, using common comparison functions such as equal, greater than, etc. To compare fields, use the query field functions that are created in the query domain type. You can combine predicates using AND and OR functions. For example,

var salaryLowerBound = qdt.param('floor'); // define the formal parameter floor

var compareSalaryLowerBound = qdt.salary.ge(salaryLowerBound); // compare the field salary to the floor

var salaryUpperBound = qdt.param('ceiling'); // define the formal parameter ceiling

var compareSalaryUpperBound = qdt.salary.le(salaryUpperBound); // compare the field salary to the ceiling

 

var combinedComparisons = compareSalaryLowerBound.and(compareSalaryUpperBound);

qdt.where(combinedComparisons); // specify the entire filter for the query

The query API supports a fluent style of query composition. The query can be written as:

qdt.where(qdt.salary.ge(qdt.param('floor')).and(qdt.salary.le(qdt.param('ceiling'))));

The above query filter accepts rows whose salary is greater than or equal to the parameter floor and less than or equal to the parameter ceiling.

Executing Queries

Once the query has been built, you can execute the query, providing the actual parameters to be used in the query. The results of the query are returned as an array in the callback.

qdt.execute({'floor': 40000, 'ceiling': 80000}, function(err, results) {
  if (err) throw Error('query failed');
  results.forEach(function(result) {
    console.log('Employee', result.name, 'has salary', result.salary);
  });
});

How to evaluate

The MySQL Connector for JavaScript is available for download and forking from GitHub.

Since we are still in the development phase, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms.

You can also learn more about other previewed functionality of MySQL Cluster 7.3 in the Development Milestone Release DevZone article.

Sunday Apr 14, 2013

Tap into Top-Level MySQL Performance with MySQL Training

MySQL, the most popular open source database, brings you great performance. You can see the performance details at http://dimitrik.free.fr/blog/. To help you get the most from MySQL's performance, take the MySQL for Performance Tuning course.

This four-day, instructor-led training course covers:

  • Performance Tuning Basics
  • Performance Tuning Tools
  • MySQL Server Tuning
  • MySQL Query Cache
  • Storage Engines
  • Schema Design and Performance
  • Performance Tuning Extras

This course is available in three delivery types:

  • Training-on-Demand: Take this course at your own pace via streaming video and access to lab environments and start training within 24 hours of registering.
  • Live-Virtual Event: Follow a live event from your own desk - no travel required. Choose from a selection of events already on the schedule suiting different timezones.
  • In-Class Event: Travel to an education center to follow this course. Below is a selection of events already on the schedule.

  • Mechelen, Belgium: 13 May 2013 (English)
  • Aix-en-Provence, France: 10 June 2013 (French)
  • Budapest, Hungary: 6 May 2013 (Hungarian)
  • Rome, Italy: 24 June 2013 (Italian)
  • Riga, Latvia: 11 November 2013 (Latvian)
  • Amsterdam, Netherlands: 10 June 2013 (Dutch)
  • Warsaw, Poland: 13 May 2013 (Polish)
  • Barcelona, Spain: 27 May 2013 (Spanish)
  • Madrid, Spain: 27 May 2013 (Spanish)
  • Tokyo, Japan: 17 September 2013 (Japanese)
  • Montreal, Canada: 17 June 2013 (English)
  • Ottawa, Canada: 17 June 2013 (English)
  • Toronto, Canada: 17 June 2013 (English)
  • Sao Paulo, Brazil: 17 June 2013 (Brazilian Portuguese)
  • Sao Paulo, Brazil: 15 October 2013 (Brazilian Portuguese)

For more information about this course or other courses on the authentic MySQL curriculum, go to http://oracle.com/education/mysql.

Wednesday Apr 10, 2013

MySQL Replication: Self-Healing Recovery with GTIDs and MySQL Utilities

MySQL 5.6 includes a host of enhancements to replication, enabling DevOps teams to reliably scale-out their MySQL infrastructure across commodity hardware, on-premise or in the cloud.

One of the most significant enhancements is the introduction of Global Transaction Identifiers (GTIDs) where the primary development motivation was:

- enabling seamless failover or switchover from a replication master to slave

- promoting that slave to the new master

- without manual intervention and with minimal service disruption.

You can download the new MySQL Replication High Availability Guide to learn more.  The following sections provide an overview of how GTIDs and new MySQL utilities work together to enable self-healing replication clusters.  

GTIDs

To understand the implementation and capabilities of GTIDs, we will use an example illustrated below:

- Server “A” crashes

- We need to failover to one of the slaves, promoting it to master

- The remaining server becomes a slave of that new master

Figure 1: MySQL Master Failover and Slave Promotion

As MySQL replication is asynchronous by default, servers B and C may not have both replicated and executed the same number of transactions, i.e. one may be ahead of the other. Consider the following scenarios:

Scenario #1

- Server B is ahead of C and B is chosen as the new master;

- Server C then needs to start replicating from the first transaction on Server B that it has yet to receive;

Scenario #2

- Server C has executed transactions that have so far not been received by Server B, but the administrator designates Server B as the new master (for example, it is configured with higher performing hardware than Server C).

- Server B therefore needs to execute later transactions from Server C, before assuming the role of the master, otherwise lost transactions and conflicts can ensue.

GTIDs apply a unique identifier to each transaction, so it becomes easy to track when each transaction has been executed on each slave. When the master commits a transaction, a GTID is generated and written to the binary log prior to the transaction. The GTID and the transaction are then replicated to the slave.

If the slave is configured to write changes to its own binary log, the slave ensures that the GTID and transaction are preserved and written after the transaction has been committed.

The set of GTIDs executed on each slave is exposed to the user in a new, read-only, global server variable, gtid_executed. The variable can be used in conjunction with GTID_SUBTRACT() to determine if a slave is up to date with a master, and if not, which transactions are missing.

A new replication protocol was created to make this process automatic. When a slave connects to the master, the new protocol ensures it sends the range of GTIDs that the slave has executed and committed and requests any missing transactions. The master then sends all other transactions, i.e. those that the slave has not yet executed. This is illustrated in the following example (note that the GTID numbering and binlog format is simplified for clarity):

Figure 2: Automatically Synchronizing a New Master with its Slaves

In this example, Server B has executed all transactions before Server A crashed. Using the MySQL replication protocol, Server C will send “id1” to Server B, and then B will send “id2” and “id3” to Server C, before then replicating new transactions as they are committed. If the roles were reversed and Server C is ahead of Server B, this same process also ensures that Server B receives any transactions it has so far not executed, before being promoted to the role of the new master.

We therefore have a foundation for reliable slave promotion, ensuring that any transactions executed on a slave are not lost in the event of an outage to the master.

MySQL Utilities for GTIDs

Implemented as Python-based command-line scripts, the Replication utilities are available as a standalone download as well as packaged with MySQL Workbench. The source code is available from the MySQL Utilities Launchpad page.

The utilities supporting failover and recovery are components of a broader suite of MySQL utilities that simplify the maintenance and administration of MySQL servers, including the provisioning and verification of replication, comparing and cloning databases, diagnostics, etc. The utilities are available under the GPLv2 license, and are extendable using a supplied library. They are designed to work with Python 2.6 and above.

Replication Utility: mysqlfailover

While providing continuous monitoring of the replication topology, mysqlfailover enables automatic or manual failover to a slave in the event of an outage to the master. Its default behavior is to promote the most viable slave, as defined by the following slave election criteria.

• The slave is running and reachable;

• GTIDs are enabled;

• The slave's replication filters do not conflict;

• The replication user exists;

• Binary logging is enabled.

Once a viable slave is elected (called the candidate), the process to retrieve all transactions active in the replication cluster is initiated. This is done by connecting the candidate slave to all of the remaining slaves thereby gathering and executing all transactions in the cluster on the candidate slave. This ensures no replicated transactions are lost, even if the candidate is not the most current slave when failover is initiated.

The election process is configurable. The administrator can use a list to nominate a specific set of candidate slaves to become the new master (e.g. because they have better performing hardware).

At pre-configured intervals, the utility will check to see if the server is alive via a ping operation, followed by a check of the connection to detect if the server is still reachable. If the master is found to be offline or unreachable, the utility will execute one of the following actions based on the value of the failover mode option, which enables the user to define failover policies:

• The auto mode tells the utility to failover to the list of specified candidates first and if none are viable, search the list of remaining slaves for a candidate.

• The elect mode limits election to the candidate slave list and if none are viable, automatic failover is aborted.

• The fail mode tells the utility to not perform failover and instead stop execution, awaiting further manual recovery actions from the DevOps team.

Via a series of Extension Points users can also bind in their own scripts to trigger failover actions beyond the database (such as Virtual IP failover). Review the mysqlfailover documentation for more detail on configuration and options of this utility.

Replication Utility: mysqlrpladmin

If a user needs to take a master offline for scheduled maintenance, mysqlrpladmin can perform a switchover to a specific slave (called the new master). When performing a switchover, the original master is locked and all slaves are allowed to catch up. Once the slaves have read all events from the original master, the original master is shut down and control is switched to the new master.

There are many Operations teams that prefer to take failover decisions themselves, and so mysqlrpladmin provides a mechanism for manual failover after an outage to the master has been identified, either by using the health reporting provided by the utilities, or by alerting provided by a tool such as the MySQL Enterprise Monitor.

It is also possible to easily re-introduce recovered masters back to the replication topology by using the demote-master command and initiating a switchover to the recovered master.

Review the mysqlrpladmin documentation for more detail on configuration and options of the utility.

Resources to Get Started

In addition to the documentation, two additional resources are useful to enable you to learn more:

- MySQL replication utilities tutorial video

- MySQL replication guide: building a self healing replication topology

Summary

The combination of options to control failover or switchover and the ability to inform applications of the failover event are powerful features that enable self-healing for critical applications.

Tuesday Apr 09, 2013

Upcoming & Past Events with MySQL attendance

Just a reminder about two important MySQL events that have already been announced:

MySQL Community Reception - “Let’s Celebrate MySQL 5.6 GA!”

  • Date: Apr 22, 2013
  • Place: TechMart Santa Clara,  5201 Great America Parkway,  Santa Clara,  CA 95054
  • See the whole announcement here;
  • More information and registration is available here.

The MySQL Connect CfP is open until April 12, 2013.

  • Date: Sep 21-23, 2013
  • Place: SFO, US
  • See the whole announcement here;
  • More information and session submission can be found here.

You can find more upcoming events where either MySQL engineers or the community team is going to attend at our wikis page.

Sunday Apr 07, 2013

Learn to Develop Your New Applications on the MySQL Database

Get the most out of your MySQL environment by learning to develop your new applications cost-effectively using MySQL with different languages such as Java and PHP.

The MySQL for Developers course is a 5-day, live, instructor-led course where you learn to:

  • Use MySQL tools to connect to databases.
  • Examine database structure and contents.
  • Generate database structure diagrams.
  • Execute complex queries and stored routines.
  • Manipulate numeric, text, spatial and temporal data.
  • Execute complex subqueries and joins.

You can take this course as a:

  • Live-Virtual Event: Take this course from your own desk - no travel required. Choose from one of the many events already on the schedule to suit different timezones. Delivery is currently scheduled in English and German.
  • In-Class Event: Travel to an education center to follow this class. Below is a selection of events already on the schedule.

  • Gummersbach, Germany: 8 July 2013 (German)
  • Hamburg, Germany: 10 June 2013 (German)
  • Munich, Germany: 10 June 2013 (German)
  • Budapest, Hungary: 15 April 2013 (Hungarian)
  • Rome, Italy: 20 May 2013 (Italian)
  • Turin, Italy: 3 June 2013 (Italian)
  • Nieuwegein, Netherlands: 13 May 2013 (Dutch)
  • Warsaw, Poland: 10 June 2013 (Polish)
  • Lisbon, Portugal: 13 May 2013 (European Portuguese)
  • Porto, Portugal: 13 May 2013 (European Portuguese)
  • Bern, Switzerland: 25 September 2013 (German)
  • Zurich, Switzerland: 11 April 2013 (German)
  • Madrid, Spain: 13 May 2013 (Spanish)
  • Kenya: 13 May 2013 (English)
  • Tunis, Tunisia: 20 May 2013 (French)
  • Sao Paulo, Brazil: 24 April 2013 (Brazilian Portuguese)

For more information on this course or others in the authentic MySQL Curriculum, go to http://oracle.com/education/mysql.

Thursday Apr 04, 2013

MySQL 5.6 Replication: All That Is New, On-Demand

The new MySQL 5.6 GA release delivers a host of new capabilities to support developers releasing new services faster, with more agility, performance and security.

One of the areas with the most far-reaching set of enhancements is MySQL replication, used by the largest web, mobile and social properties to horizontally scale highly-available MySQL databases across distributed clusters of low-cost, commodity servers.

A new on-demand MySQL 5.6 replication webinar takes you on a guided tour through all of those enhancements, including:

- 5x higher master and slave performance with Binary Log Group Commit, Multi-Threaded Slaves and Optimized Row-based replication

- Self healing replication with Global Transaction Identifiers (GTIDs), utilities and crash-safe masters and slaves

- Replication event checksums for rock-solid data integrity across a replication cluster

- DevOps features with Time-Delayed Replication, Remote binary log backup and new CLI utilities

We had some great questions during the live session - here are a few:

Question: When a multi-threaded slave is configured, how do you handle multi-database DML?

Answer: When that happens the slave switches to single-threaded mode for that transaction. After it is applied, the slave switches back to multi-threaded mode.

Question: We have seen in the past that replication would sometimes break when complex queries were fired or ALTER statements took place. Neither row-based nor statement-based replication guaranteed 100% fail-safe replication. Does MySQL 5.6 address this issue?

Answer: Replication can fail for several specific reasons. There is no general algorithm enabling "100% fail-safe" replication, but we made sure that all statements that are unsafe are either converted to row-based replication or issue warnings.

Question: We are looking for an HA cluster solution where most of the websites are Wordpress-based. Do you have any recommendations on what kind of replication solution we should choose? The tables are using InnoDB.

Answer: Using MySQL 5.6 with the master/slave replication we are discussing today, configured with GTIDs and using the mysqlfailover utility will provide you High Availability for your Wordpress cluster and InnoDB tables. Nothing you need to add - it is all part of MySQL you download today

Question: I have a question about data loss and translating my-bin.000132 to my-bin.000101 when a master fails.

Answer: That's exactly the problem GTIDs address. Without GTIDs it's hard to know the correct position on my-bin.000101 that corresponds to my-bin.000132.

With GTIDs we don't need to know positions, only what the slave has already applied; knowing that, it's simple to send to the new master any missing transactions that are on other slaves. You can ensure the data that was already in my-bin.000132 was sent to a slave by using semi-synchronous replication.

Question: How do the new utilities work with a Pacemaker/Corosync setup?

Answer: The utilities integrate with Pacemaker/Corosync via extension points. The MySQL Pacemaker agent can call mysqlrpladmin to perform a switchover.

Question: Where can I learn more about supported HA solutions for MySQL?

Answer: From the Guide to MySQL HA solutions. I hope you find this useful

Question: Are there any plans to move replication from host-based to either 'database' or 'table' based?

Answer: Current MySQL replication supports replication of specific databases or tables from a host using replication filters. This means we support replication of a host (all data which could be replicated), replication of specific databases per host, and replication of specific tables per host. (You can ignore databases and tables using filters too.)

For these and other questions, along with the full replay of the webinar, register here.

