Thursday May 21, 2009

OpenSolaris beats Linux on Memcached!

Following on the heels of our memcached performance tests on the SunFire X2270 (Sun's Nehalem-based server) running OpenSolaris, we ran the same tests on the same server, this time on RHEL5. As mentioned in the post presenting the first memcached results, a 10 GbE Intel Oplin card was used in order to achieve the high throughput rates these servers are capable of. It turned out that using this card on Linux involved a bit of work, including driver and kernel rebuilds.

  • With the default ixgbe driver from the Red Hat distribution (version 1.3.30-k2 on kernel 2.6.18), the interface simply hung during the benchmark test.
  • This led us to download the driver from the Intel site (1.3.56.11-2-NAPI) and re-compile it. This version does work, and we got a maximum throughput of 232K operations/sec on the same Linux kernel (2.6.18). However, this version of the kernel does not support multiple rings.
  • Kernel 2.6.29 supports multiple rings but still doesn't include the latest ixgbe driver (1.3.56-2-NAPI). So we downloaded, built and installed these versions of the kernel and driver. This worked well, giving a maximum throughput of 280K ops/sec with some tuning; a sketch of the driver build steps appears below.
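For anyone repeating this, the out-of-tree driver build boils down to just a few commands. This is a rough sketch based on the usual layout of Intel's driver tarballs (the exact archive name here is an assumption):

    # untar the driver source and build against the running kernel (as root)
    tar xzf ixgbe-1.3.56.11-2.tar.gz
    cd ixgbe-1.3.56.11-2/src
    make install
    # swap out the distribution driver for the freshly built one
    rmmod ixgbe
    modprobe ixgbe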

Results Comparison

The system running OpenSolaris and memcached 1.3.2 gave us a maximum throughput of 350K ops/sec, as previously reported. The same system running RHEL5 (with kernel 2.6.29) and the same version of memcached delivered 280K ops/sec. OpenSolaris outperforms Linux by 25%!

Linux Tuning

The following Linux tunables were changed to try and get the best performance:

net.ipv4.tcp_timestamps = 0
net.core.wmem_default = 67108864
net.core.wmem_max = 67108864
net.core.optmem_max = 67108864
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_window_scaling = 0
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_max_syn_backlog = 200000
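These are sysctl settings; to experiment without a reboot they can be applied on the fly, then persisted in /etc/sysctl.conf once you settle on values - for example:

    # apply one setting immediately
    sysctl -w net.ipv4.tcp_timestamps=0
    # or re-read everything from /etc/sysctl.conf after editing it
    sysctl -p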
  

Here are the ixgbe-specific settings that were used (2 transmit, 2 receive rings):

RSS=2,2 InterruptThrottleRate=1600,1600
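These are ixgbe module parameters, so they are passed when the module is loaded - either on the command line or persistently via modprobe configuration (on RHEL5 that is /etc/modprobe.conf):

    # one-off, at load time
    modprobe ixgbe RSS=2,2 InterruptThrottleRate=1600,1600
    # or persistently, as a line in /etc/modprobe.conf
    options ixgbe RSS=2,2 InterruptThrottleRate=1600,1600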

OpenSolaris Tuning

The following settings in /etc/system were used to set the number of MSI-X interrupts:

set ddi_msix_alloc_limit=4
set pcplusmp:apic_intr_policy=1

For the ixgbe interface, 4 transmit and 4 receive rings gave the best performance:

tx_queue_number=4, rx_queue_number=4
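On OpenSolaris these properties go in the driver configuration file. A sketch, assuming the stock file location (note that driver.conf syntax requires a trailing semicolon):

    # /kernel/drv/ixgbe.conf
    tx_queue_number=4;
    rx_queue_number=4;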

Finally, we bound the Crossbow threads:

dladm set-linkprop -p cpus=12,13,14,15 ixgbe0
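The binding can be verified by reading the property back:

    dladm show-linkprop -p cpus ixgbe0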

Monday May 11, 2009

Olio implemented in Java

The first cut of a Java EE implementation of Olio has now been checked into the repository. The file docs/java_setup.html gives instructions on how to build and set up this implementation. The implementation uses JSP, servlets, JPA for persistence, and Yahoo and jMaki widgets for AJAX. The web application is located in webapp/java/trunk, and the load driver, database and file loaders are in workload/java/trunk.

Check it out.

Monday Apr 27, 2009

Multi-instance memcached performance

As promised, here are more results from running memcached on Sun's X2270 (Nehalem-based server). In my previous post, I mentioned that we got 350K ops/sec running a single instance of memcached, at which point throughput was hampered by memcached's scalability issues. So we ran two instances of memcached on the same server, each using 15GB of memory, and tested both the 1.2.5 and 1.3.2 versions. Here are the results:

[Graph: throughput of two memcached instances, versions 1.2.5 and 1.3.2]

The maximum throughput was 470K ops/sec using 4 threads in memcached 1.3.2. Performance of 1.2.5 was only very slightly lower. At this throughput, the network capacity of the single 10 GbE card was reached, as the benchmark does a lot of small packet transfers. See my earlier post for a description of the server configuration and the benchmark. At the maximum throughput, the CPU was still only 62% utilized (73% in the case of 1.2.5). Note that with a single instance we were using the same amount of CPU but reaching a lower throughput rate, which once again points to memcached's scalability issues.
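For reference, here is a minimal sketch of how two such instances might be launched - the 15GB cache size and 4 worker threads come from the text above, while the ports and user are my assumptions:

    # instance 1: 15GB cache, 4 threads, default port
    memcached -d -u nobody -m 15360 -t 4 -p 11211
    # instance 2: same settings, second port
    memcached -d -u nobody -m 15360 -t 4 -p 11212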

These are really exciting results. Stay tuned - there is more to come.

Saturday Apr 18, 2009

Memcached Performance on Sun's Nehalem System

Memcached is the de-facto distributed caching server used to scale many web2.0 sites today. As sites grow to support very large numbers of users, memcached aids scalability by effectively cutting down on MySQL traffic and improving response times.

Memcached is a very light-weight server but is known not to scale beyond 4-6 threads. Some scalability improvements have gone into the 1.3 release (still in beta). With the improved hyper-threading of the new Intel Nehalem-based systems providing twice the performance of current systems, we were curious to see how memcached would perform on them. So we ran some tests, the results of which are shown below:

[Graph: memcached 1.2.5 vs. 1.3.2 throughput as the number of threads increases]

memcached 1.3.2 does scale slightly better than 1.2.5 beyond 4 threads. However, both versions reach their peak at 8 threads, with 1.3.2 giving about 14% better throughput at 352,190 operations/sec.

The improvements made to per-thread stats have certainly helped, as we no longer see stats_lock at the top of the profile. That honor now goes to cache_lock. With the increased performance of new systems making 350K ops/sec possible, breaking up this (and other) locks in memcached is necessary to improve scalability.
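The post doesn't say which profiler was used, but on OpenSolaris lockstat is one way to surface contended locks like stats_lock and cache_lock - a sketch:

    # record lock-contention events for 30 seconds, showing the
    # 10 hottest call sites with 8-deep stack traces
    lockstat -C -s 8 -D 10 sleep 30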

Test Details

A single instance of memcached was run on a SunFire X2270 (2-socket Nehalem) with 48GB of memory and an Oplin 10G card. Several external client systems were used to drive load against the server using an internally developed memcached benchmark; more on the benchmark below.
The clients connected to the server over a single 10 Gigabit Ethernet link. At the maximum throughput of 350K ops/sec, the network was about 52% utilized and the server 62% utilized, so there is plenty of head-room on this system to handle a much higher load if memcached could scale better. Of course, it is possible to run multiple instances of memcached to get better performance and utilize the system resources better, and we plan to do that next. It is important to note that utilizing these high-performance systems effectively for memcached will require the use of 10 GbE interfaces.

Benchmark Details

The Memcached benchmark we ran is based on Apache Olio - a web2.0 workload. I recently showcased results from Olio on Nehalem systems as well. Since Olio is a complex multi-tier workload, we extracted the memcached part so we could test it more easily in a stand-alone environment. This gave rise to our Memcached benchmark.

The benchmark initially populates the server cache with objects of different sizes to simulate the types of data that real sites typically store in memcached:

  • small objects (4-100 bytes) to represent locks and query results
  • medium objects (1-2 KBytes) to represent thumbnails, database rows, resultsets
  • large objects (5-20 KBytes) to represent whole or partially generated pages

The benchmark then runs a mixture of operations (90% gets, 10% sets) and measures the throughput and response times once the system reaches steady state. The workload is implemented using Faban, an open-source benchmark development framework. Faban not only speeds benchmark development; its harness is also a great way to queue, monitor and archive runs for analysis.
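To make the operation mix concrete, this is what individual sets and gets look like at the memcached text-protocol level (the key and value here are made up; nc is just a convenient way to poke at a server):

    # store a 5-byte value under key "user:42" (flags 0, no expiry)
    printf 'set user:42 0 0 5\r\nhello\r\n' | nc localhost 11211
    # read it back
    printf 'get user:42\r\n' | nc localhost 11211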

Stay tuned for further results.

Tuesday Apr 14, 2009

Scaling Olio on Sun's Nehalem Systems and Amber Road

I introduced Olio a little while ago as a toolkit to help web developers and deployers, as well as performance/operations engineers. Olio includes a web2.0 application as well as the necessary software required to drive load against it. Today, we are showcasing the first major deployment of Olio on Sun's newest Intel Nehalem-based systems - the SunFire X2270 and the SunFire X4270. We tested 10,000 concurrent users (with a database of 1 million users) using over 1TB of storage in the unstructured object store.

The diagram below shows the configuration we tested.

[Diagram: the tested Olio deployment configuration]

The Olio/PHP web application was deployed on two X2270 systems. Since these systems are wickedly fast, we also chose to run memcached on them, which eliminates the need for a separate memcached tier. The structured data in Olio resides in MySQL. For this deployment, the database used MySQL Replication, deployed as one master node and two slave nodes - all X4270 systems. The databases were created on ZFS on the internal drives of these systems. The unstructured data resides on a regular filesystem created on the Amber Road NAS appliance - the Sun Storage 7210.
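As a sketch of what the database storage setup involves, creating a ZFS filesystem for MySQL takes only a couple of commands. The device names below are hypothetical, and the 16K recordsize (matching InnoDB's page size) is a common tuning rather than something this post specifies:

    # mirrored pool on two internal drives, plus a filesystem for the MySQL data
    zpool create dbpool mirror c1t2d0 c1t3d0
    zfs create -o recordsize=16k dbpool/mysql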

I think this is a great solution for web2.0 applications - the new Nehalem servers are extremely powerful, allowing you to run a lot of users on each server, resulting in a smaller footprint and easier deployment and maintenance. Of course, this requires a little more effort in terms of tuning the software stack to ensure it can scale and utilize the CPU effectively.

The entire configuration, tuning information and performance results are documented in detail in a Sun Blueprint titled A Web2.0 Deployment on OpenSolaris and Sun Systems. So check it out and let me know if you have any questions or comments.

Friday Mar 20, 2009

FreeImage and ImageScience on OpenSolaris

Although rails is a great development environment for web applications, for a newbie the deployment of a rails application can be challenging due to the myriad dependencies on various gems, native libraries, etc.

image_science is one such ruby library that provides an easy way to generate thumbnails. It is therefore quite popular in web2.0-type applications (there isn't a site today that doesn't let you upload photographs of yourself, your pets, gadgets, whatever). It is a very simple implementation, available as a ruby gem, and so is easy to install. However, the real work is done by a native library called FreeImage, and installing this on OpenSolaris is a little bit of work. Although I use OpenSolaris here, the instructions apply to Solaris 10 as well if you are using ruby from Web Stack.


FreeImage


I came across instructions from Joyent for building FreeImage on OpenSolaris but found them to be erroneous. To install FreeImage, do the following:


  • Download the source from the repository using the command:

        cvs -z3 -d:pserver:anonymous@freeimage.cvs.sourceforge.net:/cvsroot/freeimage co -P FreeImage

  • Edit FreeImage/Makefile.solaris:

    • Change INSTALLDIR to /opt/local.

    • Change the lines for the install target as follows:

        install:
                install -m 644 -u root -g root -f $(INSTALLDIR)/include Source/FreeImage.h
                install -m 644 -u root -g root -f $(INSTALLDIR)/lib $(STATICLIB)
                install -m 755 -u root -g root -f $(INSTALLDIR)/lib $(SHAREDLIB)
                ln -sf $(INSTALLDIR)/lib/$(SHAREDLIB) $(INSTALLDIR)/lib/$(LIBNAME)

  • Make all the install directories:

        mkdir -p /opt/local/lib /opt/local/include

  • Ensure you have gcc in your PATH (it's in /usr/sfw/bin).

  • Now we are ready to build and install the library:

        gmake -f Makefile.solaris
        gmake -f Makefile.solaris install

If everything went smoothly, you should see the following files in /opt/local:
# ls /opt/local/include
FreeImage.h
# ls -l /opt/local/lib
total 13538
-rwxr-xr-x   1 root     root     2978480 Mar 17 13:35 libfreeimage-3.12.0.so
-rw-r--r--   1 root     root     3929756 Mar 17 13:35 libfreeimage.a
lrwxrwxrwx   1 root     root          22 Mar 17 13:43 libfreeimage.so.3 -> libfreeimage-3.12.0.so
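One step worth adding (my addition, not part of the original instructions): /opt/local/lib is not on the default library search path, so the runtime linker must be told where to find libfreeimage before anything can load it - for example with crle:

    # append /opt/local/lib to the default runtime library search path (as root)
    crle -u -l /opt/local/lib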

ImageScience


Now that we have FreeImage installed, installing ImageScience itself is real easy. But first, make sure you have the latest version of rubygems (1.3.1). The default rubygems in OpenSolaris 2008.11 is 0.9.4.
# gem --version
0.9.4
# gem install rubygems-update
Bulk updating Gem source index for: http://gems.rubyforge.org
Successfully installed rubygems-update-1.3.1
# update_rubygems


This will print a lot of messages, but when it's complete you should have rubygems 1.3.1.
# gem --version
1.3.1


We can now install the image_science gem. This will automatically install all dependent gems, so the messages you see depend on what you already have installed. On an OpenSolaris 2008.11 system you should see:
bash-3.2# gem install image_science
Successfully installed rubyforge-1.0.3
Successfully installed rake-0.8.4
Successfully installed hoe-1.11.0
Successfully installed ZenTest-4.0.0
Successfully installed RubyInline-3.8.1
Successfully installed image_science-1.1.3
6 gems installed


This will be followed by more messages indicating that documentation for the above modules was also installed.

You are now ready to use image_science. Have fun!

First Olio Release

We have just released the first binary version of Apache Olio for both the PHP and Rails implementations. Both implementations have now been tested quite thoroughly and we think they are robust enough for serious use - especially for performance testing the workloads.


I introduced Olio in a previous post. It is a toolkit that includes a sample web2.0 application, implemented in both PHP and Rails, along with a load generator to drive load against the application.


Please visit the Olio site and download the kits. If you find it interesting, I invite you to come join the project.

Wednesday Nov 12, 2008

Introducing Olio

For the last few months, I've been working feverishly to get the web2.0kit open-sourced. What is the web2.0kit, you ask? We introduced it at our session at the Velocity conference. The web2.0kit is a reference architecture to help anyone running a web application evaluate the suitability, functionality and performance of various web technologies.

Most web2.0 sites today use open source languages and frameworks such as PHP, Ruby on Rails and Java EE to develop their applications. Deployments of these applications also use popular open source servers such as Apache httpd, MySQL, memcached and GlassFish. Many other servers/technologies such as lighttpd, nginx, MogileFS, mongrel, thin and JRuby are also gaining popularity. To help understand the differences between these various web technology stacks and infrastructure applications, we developed the web2.0kit, which has now been open-sourced as an Apache Incubator project called Olio.

I view Olio as a tool to aid developers and deployers as well as performance engineers. For developers, Olio provides three different implementations of the exact same application using three different languages and their associated frameworks: PHP, Java EE and Rails. (At this time, the Java EE version is not yet in the Olio repository, but it will be soon.) Developers can browse the source code and understand how to design and code a complex web2.0 application. Even experienced PHP developers may gain by looking at the Olio PHP application, as we've tried to design it using object-oriented principles and well-known design patterns - typically not seen much in the PHP world! In fact, a couple of fairly large companies in China are already using Olio as a training tool for their new hires and interns. If you've been considering Rails but have been hesitant, here's your chance to check out a full-blown app and see what it will take to develop yours.

Developers can also experiment with different pieces of technology by modifying Olio. For example, Olio/PHP provides both ODBC and PDO interfaces to access the database; you can easily add a mysqli one if you want to test that. Similarly, you can test the various methods of caching that are possible in a Rails application.

When it comes to deployment, there are myriad hardware and software solutions to choose from. Should you choose Apache or lighttpd? And what about nginx? You can use Olio to performance-test various solutions and determine for yourself how these servers will behave in your installation. Unlike most test results you see out there, where someone runs a toy test comparing Apache and lighttpd using http_load or, worse, ab (see my blog post on the problems with these tools), Olio lets you test a real web2.0-like site with features that are common today, including AJAX, tags, comments, mashups, etc.

We in the Performance and Applications Engineering (PAE) group at Sun have decades of experience developing and tuning workloads. We have brought that experience to bear in developing Olio, and we hope that you can benefit from it too.

Please see the instructions on the Olio website for downloading source code. To make it easier for users to get started, we have made available pre-built binary packages from the Sun Download Center.

Please join the Olio user list to get help with setting up Olio or to make any suggestions.

Wednesday Oct 08, 2008

Cool Stack Roadmap

We released Cool Stack 1.3.1 recently. This was the last release built to work on Solaris releases as old as Solaris 10 01/06 (Update 1). Going forward, future versions of the stack will only be supported on newer Solaris 10 updates, so I'd like to urge everyone running older releases to schedule their systems for an upgrade. We highly recommend upgrading to at least Solaris 10 01/08 (Update 5), as it has many performance, security and other fixes.

For some time now, we've had two different stacks: Cool Stack for Solaris 10 and Web Stack for OpenSolaris. We cannot continue to sustain this model, where we have two different source bases with slightly different versions of various components. Further, there has been interest from some large customers in a single stack supported on Solaris 10, OpenSolaris and Linux. These customers also want production support for the stack.

To enable a unified stack, we are now transitioning Cool Stack to Sun Web Stack. The new stack was announced at OSCON in July. It will be very similar to Cool Stack in that it will be separately downloadable, but it will also be a full-fledged product: customers who want production support can now purchase it. The stack will continue to be available free of charge with limited support via a forum.

The first version of Sun Web Stack will be 1.4 (keeping the Cool Stack versioning in place) and should be available in November. Current Cool Stack 1.3.x customers may not see new functionality in terms of component upgrades in this release, as we are trying to sync it with the current versions of components in OpenSolaris. However, zones users may want to take a look at this release, as it separates the binaries from the configuration/log files, making it easier for multiple zones to share a single installation in /opt and yet use custom configurations/logs, which will be located in /etc, /var, etc., similar to other Solaris applications.

With this transition to Sun Web Stack, I will no longer be directly associated with the development of the stack. I will, however, keep an eye on its performance and continue to push for increased performance via compiler optimizations or required source changes. We in the performance team will continue to use, test and analyze the performance of many of the components in the stack, so I hope you will continue to monitor my blog as I talk more about the performance aspects of the stack.

Please do continue to use Cool Stack, and Sun Web Stack once that product comes out. Sun is committed to a high-performing web stack of open source components on all platforms.

Thursday Jun 26, 2008

Velocity 2008

I just got back from three days of conferences - two days at Velocity and one at Structure '08. The Velocity conference was billed as the first conference devoted exclusively to web performance and operations, and the sessions did live up to this. There were over 700 attendees, which is not bad for the first time.


Being a performance person, I chose to mostly attend the performance sessions. What I found was that the sessions were heavily geared towards the client side. There were sessions on how to tune your JavaScript and images, reduce network traffic, etc. - all trying to reduce the end-user response time.


Our session was of course on tuning the server side. There was another one on squid/varnish and MySQL sharding - but beyond that, the client was King.


Since I'm personally focused on the server side, this got me thinking. It may indeed be true that the bulk of the response time seen by the user is dominated by JavaScript, images, ad rendering, the network, etc., but that shouldn't lead one to think that by fixing these issues the server side will magically perform better. If anything, it will get worse (due to the increased load now possible from a more efficient client). If the server side doesn't scale, no matter how much you tune your client side, your application will not scale and/or perform.


One thing to note is that the people giving these client-side talks are from the likes of Yahoo, Google and Microsoft. They have obviously finished their server-side tuning and then had an aha moment when, after all that tuning, they realized that end-user response time was still bad - and so they have shifted their focus to client-side issues.


But if you're a startup or have just started deploying your application, I think the server side should still be your first area of focus. If your application isn't designed right or the web/server stack isn't tuned, you will not get far. As an example, running Apache on Linux without any tuning will barely support a couple of hundred connections. When you get to the point where you're comfortable that the server stack can handle load and scale reasonably, then yes, by all means shift your focus to the client side.
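To make that concrete: the connection capacity of Apache's worker MPM is bounded by a handful of settings like those below. This is purely illustrative - the right numbers depend entirely on your workload and memory:

    # httpd.conf (worker MPM): allow up to 16 x 64 = 1024 concurrent connections
    <IfModule mpm_worker_module>
        ServerLimit        16
        ThreadsPerChild    64
        MaxClients       1024
    </IfModule>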


Ideally, you would attack both at the same time - but probably no one has the resources to focus on everything at once.


So, if server-side issues are still plaguing you, please feel free to check out our presentation.

Thursday Jun 12, 2008

Performance talk at Velocity

As I indicated in my previous post on MySQL performance, we have been doing some performance work using an internally developed web2.0 application. Akara and I will be presenting this app publicly to a large audience for the first time at the upcoming Velocity conference in Burlingame, CA on June 23-24. Check out our abstract. Most of our work uses Cool Stack, so a lot of the results we will be presenting are based on that. If you're struggling with performance issues, this conference may be worth checking out.
If you will be attending the conference, please stop by and say hello. It's always good to meet people whom we only know through blogs and forums.

Wednesday Jun 11, 2008

MySQL Performance Comparison

We run an internally developed AMP benchmark to understand the performance characteristics of the AMP stack on various platforms.


We compared the AMP stack in Cool Stack 1.2 with the one in Cool Stack 1.3. We didn't see any significant difference in performance in the Apache/PHP tier between the two versions. However, there is a dramatic improvement in performance in the MySQL tier with Cool Stack 1.3. Cool Stack 1.2 shipped with MySQL 5.0.45; 1.3 now has MySQL 5.1.24.


The graph below shows the CPU utilization on the MySQL system at different load levels:

[Graph: CPU utilization on the MySQL system at increasing load levels, MySQL 5.0.45 vs. 5.1.24]

MySQL 5.1 is 4x more efficient than 5.0.45! There are improvements in the Query Cache and query optimization in MySQL 5.1, but at this point we are doing more analysis to understand the reasons for the dramatic performance increase. Here is another graph showing the network throughput at different load levels:

[Graph: network throughput at different load levels]

It appears that some of that efficiency comes from driving much less load on the network as well. We know that the client API has changed in 5.1, but it is amazing how much of a difference it makes in terms of performance.


So, the bottom line is: upgrade to MySQL 5.1 in Cool Stack 1.3 and share your experiences. I'm very interested in finding out what others see in terms of performance.

Cool Stack 1.3 is here

It's finally here. Before you download the release, please do read the documentation. It's over a month late, but hopefully the features will compensate for the delay.
Almost every single component in the stack has been upgraded. Two new packages, CSKpython and CSKnginx, have been added. And yes - the python package does include mod_python too.
You can read about the full list of changes in the Changelog.


ruby


Significant changes have been made to the ruby package. In addition to the upgrade to ruby 1.8.6p114, we include a number of gems, notably mysql, postgres and mongrel. It should also be easy to install any new gem that requires native compilation using gcc - this should just work out of the box. A word of caution if you're running ruby on SPARC: please DO NOT use gcc. If you're importing native gems, take the extra step of installing Studio 12 and use rbconfig.studio instead (simply rename it to rbconfig.rb after saving the current one). This file is located in /opt/coolstack/lib/ruby/1.8/sparc-solaris2.10.
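Spelled out, that rename amounts to the following (paths from the text; the backup file name is my choice):

    cd /opt/coolstack/lib/ruby/1.8/sparc-solaris2.10
    cp rbconfig.rb rbconfig.rb.gcc     # save the current (gcc) config
    cp rbconfig.studio rbconfig.rb     # switch to the Studio 12 config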


There have also been significant performance enhancements for ruby in this release. By changing compiler flags, we have measured a 20-30% improvement. In addition, SPARC-specific patches (which are now in ruby 1.8.7) improve performance by a further 8-15%. If you are a ruby user, let us know your experiences; I'd love to hear of any performance changes you see as well.
In a future post, I'll share some performance results on the AMP stack. But for now, please try this release and let us know what you think via the forum or comments on this blog.


Cool Stack installation in Zones


For users who have installed Cool Stack in zones, I know the upgrade is a painful process. As I mentioned earlier, we do hope to fix this by allowing upgrade installs in the future. But some users have found solutions on their own; in particular, I found this post in the forum interesting. Scroll down to reply 19.


Monday May 19, 2008

Where is 1.3?

I know many of you may be wondering why Cool Stack 1.3 isn't out yet, so I thought I'd post an update on where we are with the release. When I first started talking about it, I was hoping to get the release out in early May. But several things have conspired to cause a delay.

  1. We took on a lot for this release. Although, looking at the proposal, it may seem a simple matter to just update the versions of the various components, under the hood we decided to do a lot more streamlining of the build and packaging process. Since so many version updates were involved, that also meant more legal approvals.
  2. We have a more formal QA process for this release (and for future releases). The good news is that this means a higher-quality release; the bad news is that it now takes more time, as we have to go through at least a couple of QA cycles to qualify the release.
  3. We are going to institute a patching process for releases going forward. What this means is that we can put out patches for critical bugs and provide an upgrade path for existing installations. This should prove especially useful to users who have installed in zones. Before everyone starts jumping up with joy, I want to mention that for 1.3 you will still have to go through the pain of a fresh installation after saving your config files and re-applying them. But going forward, say for 1.3.1 or 1.4, we will provide an upgrade option, so please bear with the pain for this one release.
  4. There is now talk of making Cool Stack into a more full-fledged product with proper production support. This is one of the major reasons for all of the above enhancements and the cause of the delay, but we have had so many requests for production support that we felt it was worth taking the time to do some groundwork to make this happen.
  5. Just to be clear, we will continue to provide free support through the Cooltools forum as we do now, and we hope more of you will help us out on the forum. We know there are many experienced Cool Stack users out there - if you think Cool Stack has saved you time, we hope you will consider giving back a little of your time to the forum. We have one full-time engineer assigned to the project, and a few other Sun engineers who are interested in Cool Stack's success contribute their time. We are really short-staffed, so please, please do consider sharing your knowledge and helping out other Cool Stack users.

After reading all that, I know the big question still is: when will 1.3 be released? We have just finished incorporating almost all components and are starting QA of build 2. We have a few more things to take care of, but I'm hopeful that we can get this out in the first week of June.

Thanks for your patience and your support of Cool Stack. Please do keep your feedback coming and subscribe to the forum. 


Saturday Apr 26, 2008

Microformats and Tags

I talked about microformats in a post last year on web20expo. It appears that the technology is now going mainstream. I attended a workshop on Web2.0 Best Practices at the Web20 Expo this week in which the speaker, Niall Kennedy, expounded on the advantages of using microformats. He said he's seen significant growth in traffic on his site since he started using them, as search engine results show direct links to pages on his site.
Yahoo is adding microformats to many of their properties; the Yahoo event site already has them. This is exciting, since microformats are a bridge to the semantic web, which we've been talking about for several years now. However, the talk has never seemed to materialize into anything concrete. Meanwhile, the web2.0 world has decided to do things its own way.

A classic example is tagging. While the semantic folks talk about taxonomies and ontologies, the web guys invented folksonomies (aka tagging). Tagging has allowed users and sites to group stuff together, attaching semantic meaning to their data. Tag clouds have worked fairly well, and sites like flickr are extending the concept by automatically creating even more tags! The problem with tags, of course, is that a word can have several meanings and it's not easy to figure out which interpretation is intended. This is a problem RDF solves nicely, but more on that later.

Microformats are better than tags in the sense that they have a more rigid format and as such provide better, though not perfect, semantics. Let's look at an example:

    <div class="vevent">
      <span class="summary">JavaOne Conference</span>:
      <span class="description">The premier java conference</span>
      <p><a class="url" href="http://java.sun.com/javaone/sf">http://java.sun.com/javaone/sf</a></p>
      <p><abbr class="dtstart" title="2008-05-06">May 6</abbr>-
         <abbr class="dtend" title="2008-05-09">9</abbr>,
         at the <span class="location">Moscone Center, San Francisco, CA</span></p>
    </div>

which will display as:


JavaOne Conference:
The premier java conference

http://java.sun.com/javaone/sf

May 6-
9,
at the Moscone Center, San Francisco, CA


The advantage of such a format is that it clearly specifies the various properties associated with the event: summary, description, url, start and end dates, location, etc. However, it can still be ambiguous since it uses literals for many properties, e.g. the location. If someone specified the location simply as "San Francisco", it could mean any of 27 different San Franciscos.

If we take this formalizing a step further, we reach the world of RDF. Here every entry is specified as a triple of the form <subject> <predicate> <object>, using URIs to represent resources in an unambiguous manner. Without going into the syntactic details, we could specify a location in a standard format of number, street, city, state, country and zip. This gives the object identity - the property that uniquely identifies it.

I'll talk more about RDF and the semantic web in another post.