Saturday Apr 26, 2008

Microformats and Tags

I talked about microformats in a post about the Web2.0 Expo last year. It appears that the technology is now going mainstream. I attended a workshop on Web2.0 Best Practices at the Web2.0 Expo this week in which the speaker, Niall Kennedy, expounded on the advantages of using microformats. He said he has seen significant growth in traffic on his site since he started using them, because search engine results now show direct links to pages on his site.
Yahoo is adding microformats to many of its properties; the Yahoo events site already has them. This is exciting since microformats are a bridge to the semantic web, which we've been talking about for several years now. However, the talk has never seemed to materialize into anything concrete. Meanwhile, the web2.0 world has decided to do things its own way.

A classic example is tagging. While the semantic folks talk about taxonomies and ontologies, the web guys invented folksonomies (aka tagging). Tagging allows users and sites to group stuff together, attaching semantic meaning to their data. Tag clouds have worked fairly well, and sites like flickr are extending the concept by automatically creating even more tags! The problem with tags, of course, is that a word can have several meanings, and it's not easy to figure out which interpretation was intended. This is a problem that RDF solves nicely, but more on that later.

Microformats are better than tags in the sense that they have a more rigid format and as such provide better, although still not perfect, semantics. Let's look at an example:

<div class="vevent">
  <span class="summary">JavaOne Conference</span>:
  <span class="description">The premier java conference</span>
  <p><a class="url" href="http://java.sun.com/javaone/sf">http://java.sun.com/javaone/sf</a></p>
  <p><abbr class="dtstart" title="2008-05-06">May 6</abbr>-
  <abbr class="dtend" title="2008-05-09">9</abbr>,
  at the <span class="location">Moscone Center, San Francisco, CA</span></p>
</div>
which will display as :


JavaOne Conference:
The premier java conference

http://java.sun.com/javaone/sf

May 6-
9,
at the Moscone Center, San Francisco, CA


The advantage of such a format is that it clearly specifies the various properties associated with the event: summary, description, url, start and end dates, location, etc. However, it can still be ambiguous since it uses literals for many properties, e.g. the location. If someone specified the location simply as "San Francisco", it could mean any of 27 different San Franciscos.

If we take this formalizing a step further, we reach the world of RDF. Here every entry is specified as a triple of the form <subject> <predicate> <object>, using URIs to represent the objects in an unambiguous manner. Without going into the syntactic details, we could specify a location to be defined in a standard format of: number, street, city, state, country, zip. Better still, the location could simply be a URI that denotes exactly one San Francisco. This gives an object identity, the property that uniquely identifies it.

I'll talk more about RDF and semantic web in another post.

Wednesday Apr 23, 2008

Micro-blogging is here

I am attending the Web2.0 Expo in San Francisco this week. Today was the first day of the conference, and the crowds seemed larger than last year. The primary focus this year seems to be on social networking.

I'll blog more about other aspects of the conference, but I wanted to focus this post on the twitter phenomenon. I'd heard of twitter of course, but I just could never figure out what it was all about. What was the big deal about telling the world what you were doing every second ? Who would even care ?

I attended a panel titled "Short attention span theater: The birth of micro-blogging and micro-media". It was moderated by Gregarious Narain (he turned out not to be all that gregarious) and included Jeremiah Owyang (Forrester Research), Stowe Boyd (consultant) and Brian Solis. At the start of the session, Greg asked how many people used twitter, and almost everyone raised their hands (not me though!). He then asked the audience to twitter posts to micromedia2 (set up expressly for this session). Within minutes, the posts started rolling in, and soon my head was swimming with the twitter vocabulary: twhirl, tweet, tweetscan, twitpitch ...

Just when I was beginning to once again tell myself "I just don't get it", another guy in the audience voiced exactly what was on my mind! To which Stowe Boyd replied that if you don't get it, forget about it; it's not something that can be easily explained. He went on to suggest that the questioner give it a try for a few weeks, and perhaps he'd get it then.

Fair enough. So I'm going to try it. I got myself an account on twitter under the name shantiS. If you're into twittering, go ahead and twitter (or is that tweet?) me. When a noun becomes a verb, it means a product has arrived.

Happy twittering.



Monday Nov 05, 2007

PHP dtrace extension for Cool Stack 1.2

We have fixed the issue with the PHP dtrace extension not working in Cool Stack 1.2. As I mentioned in my announcement post, we had already identified the issue but didn't have time to fix it before the release. The problem was that /usr/ccs/bin/ld was being used to do the linking, but this doesn't work for dtrace: some initialization code needs to be called from the .init section, and that is not set up correctly unless 'cc' (rather than 'ld') does the linking.

So a simple addition to the configure line:
LD="cc"

did the trick.
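In other words, the PHP build now does something along these lines (a hypothetical, minimal illustration; the real Cool Stack build passes many more options to configure):

# Tell configure to use cc, rather than /usr/ccs/bin/ld, for the final link
./configure --prefix=/opt/coolstack/php5 LD="cc"
make
make install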

We now have two files, dtrace_1.2_sparc.so and dtrace_1.2_x86.so, posted. Download the one for your machine and do the following (a command sketch follows the list):

  • Copy it to the php5 extensions directory, /opt/coolstack/php5/lib/php/extensions/no-debug-non-zts-20060613, and rename it dtrace.so.
  • Add extension="dtrace.so" to your /opt/coolstack/php5/lib/php.ini. 
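
For example, assuming the SPARC file and the default Cool Stack paths from this post, the two steps boil down to something like:

cp dtrace_1.2_sparc.so /opt/coolstack/php5/lib/php/extensions/no-debug-non-zts-20060613/dtrace.so
echo 'extension="dtrace.so"' >> /opt/coolstack/php5/lib/php.ini
# With a PHP script running, the new probes should show up:
dtrace -l | grep php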

See my earlier post for an example of how to use dtrace to trace through the AMP stack.

Friday Nov 02, 2007

Cool Stack 1.2 Released

Finally, it's here. We missed our October 31 deadline, but hopefully that won't matter when you see the contents of the release.

Short summary of changes in the 1.2 release.

You can download the new packages from the Sun Download Center.

Some caveats to be aware of :

  • These packages will overwrite Cool Stack 1.1 packages that you may already have installed. All packages continue to install in /opt/coolstack, but some of the package names have changed. Thus, pkgadd(1M) will not know that the new CSKapache2 package installs in /opt/coolstack/apache2 and overwrites the contents of a previous Cool Stack 1.1 installation of CSKamp. So, please do save your current installation. A detailed strategy for doing this is defined on the Cool Stack site. Here are some other short-cuts :
    • Move /opt/coolstack to /opt/coolstack1.1. Remove all the CSK* packages installed. Since the actual files in /opt/coolstack no longer exist (as you've moved the directory), pkgrm will do no harm, but it does update the system's record of what packages are installed. Then delete /opt/coolstack and start installing the new packages (a command sketch follows this list).
    • Save just the files you need - typically the apache2/conf directory, your php.ini file, etc. Then remove all the CSK* packages and delete /opt/coolstack.
  • This release was built and tested on Solaris 10. During final testing on Nevada (aka OpenSolaris or SXDE), we ran into package incompatibility issues. Some packages do not install in a straightforward manner on OpenSolaris. Please use the workaround.
  • The php dtrace extension doesn't work. We have identified the fix and will post a patch soon.
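
Here is a minimal sketch of the first short-cut above, assuming a default Cool Stack 1.1 layout; the package names shown are only examples, so check what is actually installed with pkginfo first:

# Preserve the old files, then clean up the package database (hypothetical sketch)
mv /opt/coolstack /opt/coolstack1.1
pkginfo | awk '/CSK/ { print $2 }'      # list the installed Cool Stack packages
pkgrm CSKamp CSKmysql CSKperl           # remove each package reported above
rm -rf /opt/coolstack                   # clean out any leftovers, then pkgadd the 1.2 packages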

As always, please do read the FAQ (we will be updating this as we get feedback about the new release) and if that doesn't help, post your question/problem on the CoolTools Forum.

Let us know what you think of this release via the forum, the feedback alias or this blog. If we don't get feedback, it is hard to know what components to include or what bugs need to be fixed.

Tuesday Oct 16, 2007

Cool Stack on Niagara 2 Systems


Sun recently announced the Sun SPARC Enterprise T5120 and T5220 and Sun Blade T6320 systems based on the UltraSPARC T2 processor.  You can find lots of information on the various features and functionality provided by these servers.

One other cool feature is that these systems ship with Cool Stack pre-loaded. The Cool Stack 1.1 packages are available in /var/spool/pkg. You can install the ones you want using the pkgadd(1M) command as follows :

root@wgs40-82 # cd /var/spool/pkg
root@wgs40-82 # ls
CSKampSrc_sparc.pkg        CSKphplibs_sparc.pkg
CSKamp_sparc.pkg           CSKrubySrc_sparc.pkg
CSKmemcachedSrc_sparc.pkg  CSKruby_sparc.pkg
CSKmemcached_sparc.pkg     CSKsquidSrc_sparc.pkg
CSKmysqlSrc_sparc.pkg      CSKsquid_sparc.pkg
CSKmysql_sparc.pkg         CSKtdsSrc_sparc.pkg
CSKncursesSrc_sparc.pkg    CSKtds_sparc.pkg
CSKncurses_sparc.pkg       CSKtomcat_sparc.pkg
CSKperlSrc_sparc.pkg       Changelog.txt
CSKperl_sparc.pkg          CoolStack1.1_sparc.pdf
CSKphplibsSrc_sparc.pkg    LICENSE.txt
root@wgs40-82 # pkgadd -d CSKamp_sparc.pkg

The following packages are available:
  1  CSKamp     Apache httpd, PHP and MySQL
                (sparc) Apache 2.2.3, PHP 5.2.0, MySQL 5.0.33

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:


Monday Aug 06, 2007

fhb - Http Load Generator

One of the simplest performance and load tests that can be performed on any web server is to measure the response time of a static HTTP GET request. If you measure how this response time degrades as more parallel connections issue the same request, you can get a basic understanding of the performance and scalability of a web server.

Anyone who's worked with Apache long enough is probably familiar with ab, the widely used load generator/benchmarking tool. This is a sad state of affairs, as 'ab' is extremely flawed. I won't repeat its flaws here, as Scott Oaks has done an excellent job of summarizing them.

Another popular tool is http_load, but it has its drawbacks as well. Although it is better than ab in that it does maintain multiple parallel connections, the driver is not multi-threaded, so it doesn't exactly mimic a multi-user load, leading to inaccuracies in response time measurement. Further, connections are handled via select(3C), which has known scalability issues.

Enter the Faban Http Bench (fhb) command line tool. fhb is part of Faban, an open source benchmarking kit that lets you develop and run complex benchmarks, and it is designed precisely for the kind of simple http load testing described above.

fhb generates load by instantiating multiple client threads; each thread runs independently and maintains its own statistics, so the response time data collected is highly accurate. The entire infrastructure is written in Java, allowing it to run on any platform (although the wrapper tool fhb is a shell script).

fhb Usage

1. Download fhb (you only need the client tar) and do the following :
    gzip -cd faban-client-062907.tar.gz | tar xvf -
   You should find fhb in the faban/bin directory.

2. Set JAVA_HOME appropriately (JDK 1.5 is required. /usr/java should work fine but double-check to make sure it's 1.5)

3. Set your PATH to include $JAVA_HOME/bin

4. Start your web server (ideally on another machine) and have a static file of whatever size you want to test. Let's assume that the webserver is running on 10.10.18.111:80 and a file called test.html exists in its docs directory.
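
   For example, on Solaris you can create a file of a known size in the document root with mkfile (the path below assumes the Cool Stack Apache; adjust it for your web server):

   # Create an 8 KB static file to serve as the test page
   mkfile 8k /opt/coolstack/apache2/htdocs/test.html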

5. Run fhb from your client machine's faban directory as follows :

   # bin/fhb -J -Xmx1500m -J -Xms1500m -s -r 60/180/0 -c 100 http://10.10.18.111:80/test.html

   Here is an explanation of some of the args :

   -J :  anything following -J is passed to the JVM as is.

   -s :   Create the summary file and save it in the default output directory. Without this option, fhb will simply delete the files created after displaying the summary statistics. By saving the output, you can go back and look at it later if you want. A separate directory will be created for each run in /var/tmp (changeable via the -D option)

  -r : The rampup/steady-state/rampdown times for the run in seconds.

  -c : The number of connections. (You may want to start with 1, 10 before trying larger numbers)

The final parameter is the URL that is accessed. Note that the URL can also point to a php/perl script if you want to include mod_php/mod_perl in your testing.

fhb Output

When you run the tool, you will get output like the following :

Bytes Read from class :1525
Aug 6, 2007 12:49:43 PM com.sun.faban.driver.core.MasterImpl runBenchmark
INFO: RunID for this run is : 1
Aug 6, 2007 12:49:43 PM com.sun.faban.driver.core.MasterImpl runBenchmark
INFO: Output directory for this run is : /var/tmp//faban_cd/1
Aug 6, 2007 12:49:46 PM com.sun.faban.driver.core.AgentImpl run
INFO: http_driver1Agent[0]: Successfully started 100 driver threads.
Aug 6, 2007 12:49:46 PM com.sun.faban.driver.core.MasterImpl executeRun
INFO: Started all threads; run commences in 1995 ms
Aug 6, 2007 12:50:48 PM com.sun.faban.driver.core.MasterImpl executeRun
INFO: Ramp up completed
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl executeRun
INFO: Steady state completed
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl executeRun
INFO: Ramp down completed
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl getDriverMetrics
INFO: Gathering http_driver1Stats ...
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl generateReports
INFO: Printing Summary report...
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl generateReports
INFO: Summary finished. Now printing detail xml ...
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl generateReports
INFO: Summary finished. Now printing detail ...
Aug 6, 2007 12:53:48 PM com.sun.faban.driver.core.MasterImpl generateReports
INFO: Detail finished. Results written to /var/tmp//faban_cd/1.
ops/sec: 7490.989
% errors: 0.0
avg. time: 0.006
max time: 1.001
90th %: 0.05
Saving output from run in /var/tmp//faban_cd

The output files are saved in /var/tmp/faban_cd/1 for this run. Currently, the summary output is only in xml, but it's a simple matter to write an xsl script to convert it to whatever format you want. (The final release will include such a script).
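For example, once you have written such a stylesheet (summary.xsl is just a made-up name here), any XSLT processor can do the conversion; if xsltproc happens to be installed, something like this would work (the summary file name may differ in your Faban build):

xsltproc summary.xsl /var/tmp/faban_cd/1/summary.xml > /tmp/summary.html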

Response Times

A note on response times - I often see reports quoting average response times. In fact, most common benchmarking tools only capture average response times. If you are concerned about the response experienced by your web customers, then average response times just don't cut it. That is why most standard benchmarks (like TPC and SPEC) specify a 90th percentile response time criterion - the time by which 90% of the requests should have completed. This is a reasonable measure - if 90% of your customers see an adequate response, your site is probably performing okay.

To illustrate my point, look at the response times from the sample output above. The avg. time is 0.006 seconds but the 90% time is 0.05 seconds, nearly ten times the average !
 

fhb Release

fhb will be part of the impending 1.0 release of Faban. At that time, it should include documentation and a post-processing script for the summary xml output. But please do check it out now; that way, we can incorporate any feedback before its release.

But don't just stop at using fhb and doing simple GET requests to test your web server. Drive a realistic application to get a better sense of how it performs. You can use Faban's full power to write a load generator for any application, or, if you already have a full benchmark, drop it into Faban's harness to capitalize on its many wonderful features such as automated monitoring, graphing and tabular viewing of results, results comparisons, etc. Check out all the functionality it offers at the Faban website.
 


Tuesday Jul 03, 2007

APC Update

Many folks have reported in the CoolTools Forums that APC in Cool Stack 1.1 causes a SEGV.
The APC version in Cool Stack 1.1 is APC 3.0.11, and if it is enabled, extensions such as mysql, dtrace, etc. fail with a SEGV. MediaWiki doesn't work either.

We have tested APC 3.0.14, and this version seems to work much better; I'm keeping my fingers crossed that it will work for you as well. Our performance testing doesn't show any substantial differences between 3.0.11 and 3.0.14. If you want to give it a try, simply download the correct file for your platform and rename it to apc.so in your php extensions directory, /opt/coolstack/php5/lib/php/extensions/no-debug-non-zts-20060613.

APC 3.0.14 for Cool Stack 1.1 (SPARC)

APC 3.0.14 for Cool Stack 1.1 (x86)
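
For example, for the SPARC build, the update amounts to something like the following (the downloaded file name below is hypothetical; use whatever name the link above gives you, and restart your web server so mod_php picks up the new extension):

cp apc_3.0.14_sparc.so /opt/coolstack/php5/lib/php/extensions/no-debug-non-zts-20060613/apc.so
/opt/coolstack/apache2/bin/apachectl restart    # assuming the Cool Stack Apache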

Friday Jun 15, 2007

lighttpd on Solaris

Build notes and Tuning tips for Solaris

lighttpd seems to be increasingly popular, so much so that netcraft has started tracking its use on production websites. I've spent some time building, tuning and running simple performance stress tests on lighttpd 1.4.15 on Solaris and thought I'd share what I learnt.

Building lighttpd

My build of lighttpd uses the openldap library from Cool Stack. I also built and installed pcre-7.1 in /opt/coolstack using the following script :

#!/bin/sh
INSTALLDIR=/opt/coolstack
CFLAGS="-fast -xipo -xtarget=generic"

make distclean
./configure --prefix=$INSTALLDIR CFLAGS="$CFLAGS"
make
make install

And finally, here is a script to build lighttpd using Sun Studio compiler on Solaris :

#!/bin/sh
INSTALLDIR=/opt/coolstack
LDFLAGS="-L$INSTALLDIR/lib -L/usr/sfw/lib -lsendfile -R/usr/sfw/lib"
CFLAGS="-fast -xipo -xtarget=generic"
PCRECONFIG="$INSTALLDIR/bin/pcre-config"
PATH=/opt/SUNWspro/bin:/usr/ccs/bin:/usr/sbin:/usr/bin:
export PATH

make distclean

./configure --prefix=$INSTALLDIR/lighttpd --with-pic \
    --with-openssl=/usr/sfw --with-ldap \
    --with-bzip2 --with-pcre=$INSTALLDIR --disable-ipv6 \
    CFLAGS="$CFLAGS" LDFLAGS="$LDFLAGS" PCRECONFIG="$PCRECONFIG"

make
make install

Tuning lighttpd

By default, lighttpd uses poll as the event handler on Solaris; however, it also includes support for /dev/poll, which scales better (there is no support for event ports yet). To use /dev/poll, add the following to your lighttpd.conf :

server.event-handler = "solaris-devpoll"

By default, lighttpd uses sendfilev to do the writes, but there are several issues in Solaris 10 and OpenSolaris which cause sendfilev to have performance and stability problems:

6455727 - lighttpd cannot be killed because of hanging in sendfilev()
6505740 - TCP does not wake up all waiting threads waiting for TCP zero copy completion notification
6532645 - implement ZCP sendfile for 64-bit apps
5003297 - sendfile/sendfilev should avoid data copy

Some of these issues are already fixed in later Nevada builds, but if you're running on Solaris 10,  it's best to use writev by adding the following :

server.network-backend = "writev"

If you're running on a multi-core system, it is necessary to set max-worker as well, as lighttpd by default is single-process and single-threaded. It is best to set the number of workers to two times the number of cores :

server.max-worker = 4
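
If you're not sure how many cores the box has, psrinfo gives a quick count. Note that on CMT systems such as Niagara, psrinfo lists hardware threads, which will be larger than the number of cores; use the per-chip view to see the breakdown:

/usr/sbin/psrinfo | wc -l     # number of virtual processors (hardware threads)
/usr/sbin/psrinfo -pv         # per-chip detail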


For other generic lighttpd tuning tips, see the lighttpd documentation.

Friday Jun 01, 2007

Debugging AMP

In a previous post, I mentioned the availability of the dtrace extension for Cool Stack's PHP. Using this extension and the Cool Stack MySQL, it is possible to analyze the performance of your application running on this stack. At JavaOne, we demoed this using the open source MediaWiki and SugarCRM applications. dtrace is especially useful in analyzing complex multi-tier applications like AMP. Thanks to Angelo Rajadurai for creating the scripts that I describe below.

Analyzing PHP calls

So, let's look at a simple dtrace script that counts how many times each PHP function is called :

#!/usr/sbin/dtrace -Zqs

/* Count the number of calls to each PHP function (arg0 is the function name) */
php*:::function-entry
/arg0/
{
        @[copyinstr(arg0)]=count();
}

You can copy the above to a file named php.d and start it just before the operation you want to analyze. For example, if you want to see which php functions are called when you click on Edit in your MediaWiki site, start the script, click the link, then terminate the script via ^C.
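
For example (assuming you saved the script as php.d and are running as root):

chmod +x php.d
./php.d
# ... click Edit in MediaWiki, then press ^C to see the per-function counts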

Analyzing MySQL calls

The following script will print the SQL commands issued by the mysqld process. Invoke the script with the pid of the mysqld process as an argument, just before you execute the operation you want to debug (a usage example follows the script).

#!/usr/sbin/dtrace -qs

/* This program prints the SQL commands executed by the MySQL
 * process. It takes the pid of the mysqld process as its
 * only argument.
 *
 * This script observes the dispatch_command() function and
 * prints the SQL statement (arg2). Before you use this D-script,
 * replace the mangled name below with the output of the following
 * command, run from the mysql bin directory:
 *   /usr/ccs/bin/nm mysqld | awk -F'|' '{ print $NF; }' | grep dispatch_command
 */

pid$1::_Z16dispatch_command19enum_server_commandP3THDPcj:entry
{
        printf("%d::%s\n",tid,copyinstr(arg2));
}
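
Assuming you saved the script as mysql.d (a name used here just for illustration), one way to pick up the mysqld pid automatically is:

chmod +x mysql.d
./mysql.d `pgrep -x mysqld`     # press ^C when you're done tracing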

 

Tracing thru PHP & MySQL 

Finally, here's a script to trace through both the PHP and MySQL calls. The output is color-coded : blue for PHP functions and green for MySQL functions (admittedly there are many different ways to format the output and this one's a bit of a hack - but it works). This script should be passed the pid of the mysqld process as an argument. Note that this script assumes that both PHP and MySQL are running on the same system. If they are running on different systems, then you will have to run the individual scripts shown above on the different systems.

#!/usr/sbin/dtrace -ZqFs

/* We use terminal escape sequences to change color
 * PHP - Blue
 * MYSQL - Green
 */

php*:::function-entry
/arg0/
{
        printf("\033[01;34m%s\033[0m\n",copyinstr(arg0));
}

php*:::function-return
/arg0/
{
        printf("\033[01;34m%s\033[0m\n",copyinstr(arg0));
}

pid$1::_Z16dispatch_command19enum_server_commandP3THDPcj:*
{
        self->sql = copyinstr(arg2);
        printf("\033[01;32m%s\033[0m\n",self->sql);
}

 

 

Monday May 14, 2007

OpenSolaris WebStack launched

New OpenSolaris Project: Web Stack 

We just launched a new OpenSolaris project called Web Stack. With the great response we have seen for Cool Stack, we thought this would be a good time to get the community to participate. Although the starting point for Web Stack is Cool Stack 1.1, we hope it will grow much beyond that. Please visit the project site to get more details on why we are doing this. We'd love to get your feedback on whether you think this is a good idea and what open source web technologies/apps you would like to see added.

And of course, we hope that you will participate in the project. You can help by building portions of the stack, adding new libraries and applications, testing binaries, etc. So please do let us know whether you can help.

 

Shanti

 


Wednesday May 09, 2007

Building mod_jk

Here are the instructions for building mod_jk.

1. Download the source and untar it.

2. Ensure you have the Sun Studio compiler in your PATH, followed by /usr/ccs/bin. Save the following as 'make_solaris.sh' in the 'native' directory :

#!/bin/sh
INSTALLDIR=/opt/coolstack
APACHEDIR=$INSTALLDIR/apache2
make distclean
PATH=$INSTALLDIR/bin:$PATH
export PATH
./configure --with-apxs=/opt/coolstack/apache2/bin/apxs --with-java-home=/usr/java --with-prefix=$INSTALLDIR
make
make install

3. Go to the 'native' directory and run ./make_solaris.sh

4. Edit your httpd.conf file to include :

  LoadModule jk_module        modules/mod_jk.so

5. Restart the apache httpd server.
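
To verify that the directive parses and to pick up the change, something like the following should work (the paths assume the Cool Stack Apache). Note that before mod_jk will actually route requests, you also need a workers.properties file referenced via JkWorkersFile and one or more JkMount directives pointing at your Tomcat instance.

/opt/coolstack/apache2/bin/apachectl configtest
/opt/coolstack/apache2/bin/apachectl restart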

Let me know if this doesn't work for you.

Shanti

 

Tuesday May 08, 2007

Cool Stack at JavaOne

If you're at JavaOne this week, please do visit the "Solaris + AMP" pod (#976). We are demoing the use of SMF and dtrace on Cool Stack. You can see how dtrace can be used to debug and trace the code path through your entire application, starting from the Javascript in the browser, through PHP and finally to MySQL at the back-end. We are also distributing Cool Stack 1.1 on a DVD at the pod.

I will be there on Thursday between 11:00 AM and 3:00 PM, so please do stop by and say hi.

Shanti 

Tuesday May 01, 2007

Dtrace support for PHP

Dtrace is one of the coolest things in Solaris 10, adding great observability to applications. A Dtrace provider for PHP that adds probes to function entry and exit points has long been available. I finally got around to integrating this with the php in Cool Stack. 

Bryan Cantrill has some excellent examples of how to use dtrace to trace through your php code, down to the system libraries and kernel (if you want to go that far !) .

Instructions for installing the php dtrace extension

1. Download the shared library for the extension :

    dtrace.so.x86.bz2

    dtrace.so.sparc.bz2

2. Install the extension :

    # bunzip2 dtrace.so.<arch>.bz2
    # cp dtrace.so.<arch> /opt/coolstack/php5/lib/php/extensions/no-debug*/dtrace.so

3. Enable the dtrace extension :

  Edit your php.ini (default is in /opt/coolstack/php5/lib/php.ini) and add the line :

  extension="dtrace.so"

4. Check that the extension has been loaded correctly: run a sample php script that takes a long time to complete (you can use Bryan's blastoff.php to do this) and, while it is running, list the php probes:

#/opt/coolstack# dtrace -l |grep php
39131   php16088         dtrace.so                php_dtrace_execute function-entry
39132   php16088         dtrace.so       php_dtrace_execute_internal function-entry
39133   php16088         dtrace.so                php_dtrace_execute function-return
39134   php16088         dtrace.so       php_dtrace_execute_internal function-return
#

5. You should now be able to use the dtrace provider whenever you want using dtrace scripts. The dtrace provider will work with mod_php if you're running Apache or if you're running php stand-alone. Here is a sample command that will show which functions are called :

# dtrace -n function-entry'{printf("called %s() in %s at line %d\n", \
copyinstr(arg0), copyinstr(arg1), arg2)}' -q


Please feel free to contribute additional dtrace scripts.

 

Shanti



Cool Stack support is now available !

We now have Developer support for Cool Stack (or Solaris + AMP). This dedicated online support service for developers provides technical assistance for code support, diagnostic advice, and programming questions that may include:

  • Sanity Checks
  • Code level support
  • Best practice guidance
  • Workarounds when available
  • And other forms of technical assistance

Sun Developer Expert Assistance Service is available to all developers, at a cost of $49 (USD) per request, or unlimited requests for an annual subscription cost of $249 (USD).

Check out the details at http://developers.sun.com/services/expertassistance/

 

Shanti

 

Wednesday Apr 18, 2007

Web2.0 Expo

I attended the Web 2.0 Expo held in San Francisco between April 15-18. Here are some thoughts on some of the things that caught my attention.

 Rich Internet Applications

There was a lot of talk about RIAs, with many products ranging from full-blown development environments to languages, language environments, etc. Some examples include Apollo (a new announcement from Adobe), Django, curl (no, this is not the OSS libcurl), and Silverlight (Microsoft's response to Adobe!).
So it seems that applications are once again moving onto the desktop - after moving from fat clients to the web, we have now realized that the web environment really is not rich enough for many applications.

I can't help wondering why no one ever mentions Java applets in this context. The technology was probably 10 years too early. Perhaps if Sun had announced Java applets today at the Web 2.0 Expo, they would have passed muster as an RIA development technology. Think about it - Apollo has no threading support and is re-inventing the "sandbox" wheel that applets defined. Of course, we would have to tweak applets for today's multi-media apps, but I'll take programming in Java over Javascript any day. But alas, the day of the applet has passed. So let's move on.

RDF vs Microformats

There were several sessions talking about microformats, the simple data formats that are more human readable. (It's funny how XML was touted as being both human readable and machine readable - today, it seems that microformats have that distinction.) This does seem like cool stuff - Mozilla is working on microformats support in Firefox, which means, for example, that you can drag and drop contacts, appointments, etc. to your desktop address book and calendar.

There wasn't a whisper about RDF. From the description of microformats, it appears that they are solving the same kinds of problems we expected RDF to solve. Microformats have a leg up on RDF because they work in any browser, whereas older browsers will barf at RDF. If developers start using microformats (some already have), I suspect RDF will be dead as far as its use in typical web pages goes (foaf:person vs hCard) and it will remain exclusively in the domain of the semantic web - which may be fine, I guess.

Browsers

Who can forget the browser wars? If any ex-Netscape engineer out there reads this, please rejoice. Everywhere I looked, every demo I saw (including screenshots in presentations) used Firefox, not IE. With the exploding number of extensions and plug-ins, I really think this time around Firefox will win. It's just a matter of time!


About

I'm a Senior Staff Engineer in the Performance & Applications Engineering Group (PAE). This blog focuses on tips to build, configure, tune and measure performance of popular open source web applications on Solaris.
