Tuesday Sep 08, 2009

My New Blog

Take a look at my new blog, Zero Credibility. Just moving things over to Wordpress in order to have a little more control over my content in the future.

Monday Jul 13, 2009

Linux Desktop (still) Not Ready For Primetime

The other day, I got an itch to replace the Windows XP OS on my media PC with Linux. It's an older machine that's low on memory and quite slow because of this. However, it does the job as a media PC - it plays all the audio and video formats I need. Nonetheless, boredom drove me to burn and install Ubuntu 9.

The install was simple and went smoothly, but I quickly ran into the first problem. I have a NAS device that runs an embedded Samba server. Windows, and Mac for that matter, mount it fine. Linux? Nope. A little googling told me that kernel 2.6.25 broke compatibility with the device's Samba implementation. Luckily, the NAS has a USB port as well, and that did work.

After getting my media disk mounted as a USB device, I went to play some movies. I first tried the built-in movie player application. It would start, then exit immediately. No errors. I next installed VLC. Same thing. Googling on this topic informed me that I needed to change the video output device from "default" to "X11". After changing this, videos played in both applications (sort of).

When the videos played, I saw screen tearing whenever a lot of action occurred. For the life of me, I could not find how to rectify this. A new video driver? I found instructions for certain types of video cards, but I have a generic on-board Intel graphics controller. The colors also looked washed out, though I can't really quantify that as a real problem.

Finally, since Boxee was getting so much hype and didn't run on Windows (actually, there's an Alpha now), I decided to install that. Apt-get did the job efficiently. Running Boxee got me a blank screen. Googling again showed me that Boxee doesn't support the latest version of Ubuntu, which of course is the one I installed.

All in all, I ended up with less functionality and poorer performance, and I lost about 4 hours of my life getting as far as I did.



Monday Jun 01, 2009

Validation Error: Value is not valid

There's a reason why most JSF apps stick with string values when it comes to select boxes. Passing object values back and forth is fraught with annoying little gotchas. Here's the latest I ran into.

I set up a select box with an object value and a converter. The select items have object values. All is good. When I submit the form, I see "Validation Error: Value is not valid". My first reaction was that I didn't have any validation on the page, so how could a value be not valid?

This error happens, among other reasons, because the submitted value doesn't match any of the values in the select items. JSF makes that comparison with equals(), so the value's class needs to define equals() (and, by convention, hashCode()).
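To see why the override matters, here's a minimal sketch. The Color class is hypothetical, not from any real app; the point is that JSF's select-item matching compares the converter's freshly built object against the item values with equals(), so identity comparison can never succeed.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

// Hypothetical select-item value class. JSF compares the converted,
// submitted value against each SelectItem value using equals().
class Color {
    private final String name;

    Color(String name) { this.name = name; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Color)) return false;
        return Objects.equals(name, ((Color) o).name);
    }

    @Override public int hashCode() { return Objects.hash(name); }
}

public class EqualsDemo {
    public static void main(String[] args) {
        List<Color> selectItems = Arrays.asList(new Color("red"), new Color("blue"));
        // The converter builds a *new* instance from the submitted string,
        // so identity (==) never matches -- only equals() can.
        Color submitted = new Color("red");
        System.out.println(selectItems.contains(submitted)); // true
    }
}
```

Comment out the equals() override and contains() returns false, which is exactly the "Value is not valid" situation.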

Friday May 08, 2009

Live Search in JSF

Suppose we have a list of things, or a table of things, and a search box. The list should update based on what the user types. Additionally, the search should be live, in that the table updates as the user is typing; they are not required to hit enter, click a button, or otherwise take some action.

For the first attempt at this, let's add a valueChangeListener to the inputText component.

<ice:inputText
  value="#{aBean.searchFilter}"
  valueChangeListener="#{aHandler.search}"/>

The problem is that this listener is not triggered until the form submits. You're going to need some JavaScript to make that happen. For the next attempt, let's hook some JavaScript into the component's keyup event,

<ice:inputText
  value="#{aBean.searchFilter}"
  onkeyup="doSubmit(this)"/>

 and,

function doSubmit(element) {
  iceSubmitPartial(
    document.getElementById("form"),
    element,
    MouseEvent.CLICK
  );
  setFocus(element);
}

Note that I'm using Icefaces, and making use of their utility JavaScript function iceSubmitPartial() to do a "partial submit". In a nutshell, this submits only this element and ignores the rest of the form. The call to setFocus() (another Icefaces utility) keeps the input from losing focus after the submission.

This solution works fine, as long as the user is a slow typist. To understand what's wrong, suppose the user (quickly) types "java".

  1. user types "j"
  2. keyup occurs for "j"
  3. form is submitted
  4. bean is updated with value "j"
  5. user types "a"
  6. response is returned from #3
  7. input is updated with value from backing bean: "j"

The response overwrites the "a" that the user just typed. What we want to do is add a delay to the form submission. If a keyup occurs, queue a form submission. If another keyup occurs, unqueue the first form submission and queue another, and so on. It's setTimeout()/clearTimeout() to the rescue for this,

var submitTimeout = null;

function doSubmit(element) {
    iceSubmitPartial(
        document.getElementById("form"),
        element,
        MouseEvent.CLICK
    );
    submitTimeout = null;
    setFocus(element);
}

function submitNow(element) {
    if (submitTimeout != null) {
        clearTimeout(submitTimeout);
    }
    submitTimeout = setTimeout(function() { doSubmit(element); }, 1000);
}

The component binds keyup to submitNow(),

<ice:inputText
  value="#{aBean.searchFilter}"
  valueChangeListener="#{aHandler.search}"
  onkeyup="submitNow(this)"/>

We use a delay of 1s. This still isn't perfect, because there's still a chance that the user will type something between a form submission and its response. It is, however, much less likely considering average typing patterns.
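For what it's worth, this queue-and-cancel pattern (usually called "debouncing") isn't browser-specific. Here's a sketch of the same idea in plain Java using ScheduledExecutorService; the Debouncer class and its names are illustrative, not part of the Icefaces solution above.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Each call() cancels the pending task and schedules a fresh one,
// so the task only runs after the calls have gone quiet for delayMillis.
public class Debouncer {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long delayMillis;
    private ScheduledFuture<?> pending;

    public Debouncer(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    public synchronized void call(Runnable task) {
        if (pending != null) {
            pending.cancel(false); // unqueue the previous submission
        }
        pending = scheduler.schedule(task, delayMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```

Three rapid call()s within the delay window result in the task running only once, which is exactly the behavior we want from the keyup handler.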


Thursday May 07, 2009

JSF Actions on the URL

In a recent entry, I described a mechanism for passing a JSF view ID as a GET parameter to force JSF to a particular view. This entry improves on that by allowing a JSF action to be specified as a GET parameter.

After writing the previous entry, I started thinking ... JSF already has a mapping from action to view ID, why not make use of that to simplify the URL? Instead of:

/faces/home.xhtml?viewId=/faces/other.xhtml

we have this,

/faces/home.xhtml?jsf.action=success

So, from the current view /faces/home.xhtml, with action success, go to the view defined by the navigation rules in faces-config.xml. For example,

     <navigation-rule>
        <from-view-id>/faces/summary.xhtml</from-view-id>
        <navigation-case>
            <from-outcome>success</from-outcome>
            <to-view-id>/admin/facelet/finished.xhtml</to-view-id>
        </navigation-case>
        ...
     </navigation-rule>
     ...

If our current view is /faces/summary.xhtml, and the jsf.action=success parameter is present, then move to view ID /admin/facelet/finished.xhtml. This is rather slick, as it makes use of existing rules; no additional configuration is required. Anyway, here's the code:

public class ActionPhaseListener implements PhaseListener {

    public ActionPhaseListener() {
    }

    public PhaseId getPhaseId() {
        return PhaseId.RESTORE_VIEW;
    }

    public void beforePhase(PhaseEvent phaseEvent) {
    }

    public void afterPhase(PhaseEvent phaseEvent) {
        if (navigationRules == null) {
            navigationRules = new NavigationRules();
        }

        FacesContext ctx = phaseEvent.getFacesContext();
        HttpServletRequest request =
                (HttpServletRequest) ctx.getExternalContext().getRequest();

        String action = request.getParameter("jsf.action");

        if (action != null) {
            String currentViewId = ctx.getViewRoot().getViewId();
            NavigationRule nr = navigationRules.getNavigationRules().get(currentViewId);
            if (nr == null) {
                nr = navigationRules.getNavigationRules().get(null);
            }
            NavigationCase nc = nr.getNavigationCases().get(action);
            if (nc != null) {
                String newViewId = nc.getToViewId();
                UIViewRoot page = ctx.getApplication().getViewHandler().createView(ctx, newViewId);
                ctx.setViewRoot(page);
                ctx.renderResponse();
            }
        }
    }

    private NavigationRules navigationRules = null;
} 

In the interest of not boring you, NavigationRules.java is linked. It reads faces-config.xml using ServletContext.getResource() and parses it into bean-like objects using JDOM.

Alain commented on my previous post that the solution there was problematic, as it allowed access to arbitrary views that may depend on objects being set up in other views (think of a workflow / wizard). This solution has no such problem, as you can only navigate according to the rules you've already defined in faces-config.xml.



Tuesday May 05, 2009

Poor Man's JSF Navigation By URL

Ever wanted to access a JSF view by entering its view ID directly on the URL? This is a common request. In my case, I needed to link to specific places in my JSF app from a legacy non-JSF application. This is the type of thing you'd think would be straightforward, but JSF falls flat.

I scoured the web for solutions to this, and this is the simplest: implement a phase listener. In the interest of getting right to it, here it is:

public class RedirectPhaseListener implements PhaseListener {

    public RedirectPhaseListener() {
    }

    public PhaseId getPhaseId() {
        return PhaseId.RESTORE_VIEW;
    }

    public void afterPhase(PhaseEvent phaseEvent) {
    }

    public void beforePhase(PhaseEvent phaseEvent) {
        FacesContext ctx = phaseEvent.getFacesContext();
        HttpServletRequest request =
                (HttpServletRequest) ctx.getExternalContext().getRequest();

        String viewId = request.getParameter("viewId");

        if (viewId != null) {
            UIViewRoot page = ctx.getApplication().getViewHandler().createView(ctx, viewId);
            ctx.setViewRoot(page);
            ctx.renderResponse();

        }
    }
} 

The phase listener gets the request and checks for a parameter. The parameter is the path of the view ID you want to visit, for example /faces/someview.xhtml. With Facelets, these URLs end up looking funny, because the real view ID is also in the URL,

 /faces/home.xhtml?viewId=/faces/other.xhtml

Not nice, but it works. Other solutions I've seen get fancy by keeping a mapping of view + action result = new view. That's cleaner, and it's easy to do once you understand what's going on above.

Friday Apr 24, 2009

Cross-domain IFrame Resizing

IFrames are a great way to do a low-tech mashup. The first thing you will want to do after getting an IFrame on your page is to resize it to get rid of the scroll bars that are probably present. A little googling turns up solutions like this,

function resizeFrame(f) {
    f.style.height = f.contentWindow.document.body.scrollHeight+'px';
} 
 ...
<iframe onload="resizeFrame(this)" src="..." ... 

This works great as long as the IFrame source is in the same domain as your client page. If it's not, your page is prevented from getting or setting any attributes of the IFrame. To get around this, you can make a  local proxy to fool your page into thinking that the IFrame is in your domain. For example,

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@page import="java.io.*,java.net.*"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<%!
    public static String scrape(String url) {
        try {
            URL u = new URL(url);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(
                    u.openStream()));

            String inputLine;
            StringBuffer b = new StringBuffer();
            while ((inputLine = in.readLine()) != null) {
                b.append(inputLine);
            }

            in.close();
            return b.toString();
        } catch (IOException ioe) {
            return "Could not scrape URL: " + ioe;
        } 
    }
%>
<%= scrape(request.getParameter("url"))%> 

Now your IFrame looks like,

<iframe 
  onload="resizeFrame(this)" 
  src="proxy.jsp?url=..."></iframe>

where ... is the page outside your domain. As soon as this page loads into your IFrame you will notice the next problem. Since our proxy is dumb and doesn't rewrite URLs, any relative URLs in the scraped page are broken, including images. To fix this, use the BASE tag in the head of the scraped page,

<html>
  <head>
    <base href="http://opensso.dev.java.net/console"> 
...

This obviously relies on you being able to modify the IFrame source, which is not possible in most situations.

A thought I had was to dynamically modify the base URI of the IFrame. Since the IFrame source is now local, I expected that would be possible. Apparently, the base URI (an attribute of the document) is set when the IFrame loads and is read-only from then on. That left me with mucking with the markup before the browser processed it. A bit of code to insert the BASE tag is all that was needed,

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Scraper {

    private String url;

    public Scraper(String url) {
        this.url = url;
    }

    public String scrape() throws IOException {
        URL u = new URL(url);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(
                u.openStream()));

        String inputLine;
        StringBuffer b = new StringBuffer();
        while ((inputLine = in.readLine()) != null) {
            b.append(inputLine);
        }

        in.close();

        String base = getBase(url);
        String result;

        if (base != null) {
            result = setBase(b.toString(), base);
        } else {
            result = b.toString();
        }

        return result;
    }

    private static String getBase(String url) {
        try {
            URL u = new URL(url);
            StringBuffer b = new StringBuffer();
            b.append(u.getProtocol());
            b.append("://");
            b.append(u.getHost());
            if (u.getPort() != -1) {
                b.append(":");
                b.append(u.getPort());
            }

            return b.toString();
        } catch (MalformedURLException mfue) {
            return null;
        }
    }

    private static String setBase(String content, String base) {
        // if a base tag already exists, leave the page alone
        Pattern basePattern = Pattern.compile("<base.*?>", Pattern.CASE_INSENSITIVE | Pattern.DOTALL);
        Matcher baseMatcher = basePattern.matcher(content);
        if (baseMatcher.find()) {
            // base is already set
            return content;
        }

        // add new base tag
        Pattern headPattern = Pattern.compile("<head>(.*?)</head>", Pattern.CASE_INSENSITIVE | Pattern.DOTALL);
        Matcher headMatcher = headPattern.matcher(content);

        if (headMatcher.find()) {
            StringBuffer newHead = new StringBuffer();
            newHead.append("<head>\n");
            newHead.append("<base href=\"");
            newHead.append(base);
            newHead.append("\" target=\"_blank\"/>\n");
            newHead.append(headMatcher.group(1));
            newHead.append("\n");
            newHead.append("</head>\n");

            // quoteReplacement() so $ and \ in the scraped head don't break replaceFirst()
            content = headMatcher.replaceFirst(Matcher.quoteReplacement(newHead.toString()));
        }

        return content;
    }
}

This solution is obviously limited in that it relies on you being able to put code on your server. Another solution I found relies only on modifying the source of the IFrame, but I found the hackiness offensive.

Tuesday Apr 07, 2009

Casting Generics

Consider the following coding problem

class A { ... }
class B extends A { ... }

List<A> aList = ...
List<B> bList = ...

List<A> getAList() {
    return bList;
}

This doesn't compile. Why? Because while B is a subclass of A, List<B> is not a subclass of List<A>. Despite that, it's quite common that you'd want to do something like this. Consider a more concrete example.

interface TreeNode { ... }
interface ContainerTreeNode extends TreeNode {
    List<TreeNode> getChildren();
}

class Tree {
    Tree(TreeNode rootNode) { ... }
}

To make themselves tree-storable, other classes just implement TreeNode or ContainerTreeNode. The problem is in the implementation of ContainerTreeNode. For example,

class Employee implements TreeNode { ... }
class Manager implements ContainerTreeNode {
    List<Employee> employees ...;
    ... 
    List<TreeNode> getChildren() {
        return employees;
   }
}

You'd like to be able to do this, but it won't compile for the reasons stated above. The solution is to use the <? extends T> idiom. Declare getChildren() as such,

 List<? extends TreeNode> getChildren();
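To put it all together, here's a compilable sketch of the Manager/Employee example with the wildcard in place. The fields and constructors are hypothetical details filled in for the sake of a runnable example.

```java
import java.util.Arrays;
import java.util.List;

interface TreeNode { }

interface ContainerTreeNode extends TreeNode {
    // The wildcard lets implementations return a list of any TreeNode subtype.
    List<? extends TreeNode> getChildren();
}

class Employee implements TreeNode { }

class Manager implements ContainerTreeNode {
    private final List<Employee> employees;

    Manager(Employee... employees) {
        this.employees = Arrays.asList(employees);
    }

    // Returning List<Employee> as List<? extends TreeNode> now compiles.
    public List<? extends TreeNode> getChildren() {
        return employees;
    }
}

public class WildcardDemo {
    public static void main(String[] args) {
        ContainerTreeNode boss = new Manager(new Employee(), new Employee());
        System.out.println(boss.getChildren().size()); // 2
    }
}
```

Note the trade-off: callers can read TreeNodes out of the returned list, but the compiler won't let them add to a List<? extends TreeNode>, which is precisely what keeps this safe.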

Thursday Mar 26, 2009

icefaces: remove with effect

So you have your JSF+Icefaces application that displays some list of things, and you want the user to be able to remove items from the list. Add in a small twist: the removed item should fade out, using the Icefaces Effects SDK.

Sounds good so far. Add the effect attribute to your component,

<ice:panelGroup
  effect="#{bean.effect}"
  ...

Now go to your remove action (listener) method and add the effect initialization to your bean,

public void removeListener(ActionEvent event) {
  beanList.remove(bean);
  Effect fade = new Fade();
  fade.setSubmit(true);
  fade.setTransitory(false);
  bean.setEffect(fade);
}

When you run this code, you won't see the effect. A little thought makes the problem obvious. When the bean is removed from the list, the view is updated immediately. Since the bean is no longer in the list, it is not rendered, at all, with any effect. How do we get our remove effect then?

There are actually two solutions. The first is to simply not remove the item from the list, and instead keep a "visible" flag indicating whether the item is really in the list. The most obvious way to do this is to change removeListener() above to,

public void removeListener(ActionEvent event) {
  bean.setVisible(false);
  Effect fade = new Fade();
  fade.setSubmit(true);
  fade.setTransitory(false);
  bean.setEffect(fade);
}

and change your view,

<ice:panelGroup
  effect="#{bean.effect}"
  visible="#{bean.visible}"
...


However, when you run this code, you get the same result: the item disappears immediately and you do not see the effect. It's the same problem. The visible flag is set immediately, so the component is hidden in the view immediately. The trick here is an undocumented aspect of the Effects SDK. If the effected component has a visible attribute, and it's bound to a value, the effect will set the value to true or false depending on the type of effect. For example, a Fade sets it to false, and an Appear sets it to true. Keeping this in mind, we remove the setVisible(false) call from the removeListener() method and we're done.

This is not a great solution though, as it requires you to check the visible flag wherever your business logic considers the list of items. The second solution involves creating a phase listener that really removes the item from the list, after the effect completes. It goes like this. First, create a bean to represent a phase event,

public class PhaseEventAction {
  private PhaseId phaseId;
  private boolean doBeforePhase;
  private String action;
  private Object[] arguments;
  private Class[] parameters;

  // constructor, getters, and setters elided
}

Now create an implementation of PhaseListener to process the above phase events,

public class QueuedActionListener implements PhaseListener {
  public PhaseId getPhaseId() {
    return PhaseId.ANY_PHASE;
  }

  public void beforePhase(PhaseEvent pe) {
    checkForOperations(true, pe);
  }
  public void afterPhase(PhaseEvent pe) {
    checkForOperations(false, pe);
  }
  private void checkForOperations(boolean doBeforePhase, PhaseEvent evt) {
    FacesContext fc = FacesContext.getCurrentInstance();
    QueuedActionBean qab = (QueuedActionBean)fc.getApplication().
      createValueBinding("#{queuedActionBean}").getValue(fc);
    List<PhaseEventAction> invoked = new ArrayList<PhaseEventAction>();

    for (PhaseEventAction pea : qab.getPhaseEventActions()) {
      if (pea.getPhaseId() == evt.getPhaseId()) {
        if (pea.isDoBeforePhase() == doBeforePhase) {
          javax.faces.application.Application a = fc.getApplication();
          MethodBinding mb = a.createMethodBinding(pea.getAction(), pea.getParameters());
          if (mb != null) {
            mb.invoke(fc, pea.getArguments());
            invoked.add(pea);
          }
        }
      }
    }

    qab.getPhaseEventActions().removeAll(invoked);
  }
}  

The idea is that we're creating a queue of events. Specifically, in this case, clients will add remove events to the queue, and they are not processed until the RENDER_RESPONSE phase. Let's keep going. Create a bean to hold the queue,

public class QueuedActionBean implements Serializable {
  private List<PhaseEventAction> phaseEventActions = 
    new ArrayList<PhaseEventAction>();
  public List<PhaseEventAction> getPhaseEventActions() {
    return phaseEventActions;
  }
}

Now in faces-config.xml, declare the queue bean,

<managed-bean>
  <managed-bean-name>queuedActionBean</managed-bean-name>
  <managed-bean-class>com.sun.identity.admin.model.QueuedActionBean</managed-bean-class>
  <managed-bean-scope>session</managed-bean-scope>
</managed-bean>

and declare the phase listener,

<lifecycle>
  <phase-listener>
    com.sun.identity.admin.model.QueuedActionListener
  </phase-listener>
</lifecycle>

Client beans that want to play this game inject queuedActionBean into themselves,

<managed-property>
  <property-name>queuedActionBean</property-name>
    <value>#{queuedActionBean}</value>
  </managed-property>
</managed-bean>

and add PhaseEventAction objects onto the queue. JSF calls QueuedActionListener, which processes the events when the conditions are right. In this case, we want the phase ID to be RENDER_RESPONSE and doBeforePhase to be false. Here's our new removeListener(), which includes the code to add the PhaseEventAction to the queue,

public void removeListener(ActionEvent event) {
  Effect fade = new Fade();
  fade.setSubmit(true);
  fade.setTransitory(false);
  bean.setEffect(fade);

  PhaseEventAction pea = new PhaseEventAction();
  pea.setDoBeforePhase(false);
  pea.setPhaseId(PhaseId.RENDER_RESPONSE);
  pea.setAction("#{viewConditionsHandler.handleRemove}");
  pea.setParameters(new Class[] { SomeClass.class });
  pea.setArguments(new Object[] { someObject });

  getQueuedActionBean().getPhaseEventActions().add(pea);
}

While we used this to solve a very specific problem, the solution is general. Set into the PhaseEventAction the phase, whether to run the action before or after the phase, the action method, the signature of the action method, and the arguments to pass to it.
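To see the shape of the idea without the JSF machinery, here's a hypothetical, self-contained sketch of the same deferred-action queue. Actions are tagged with the phase they should run in, and a listener drains only the matching ones when that phase fires; the class and enum names here are illustrative, not the Icefaces code above.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// A stripped-down stand-in for the QueuedActionBean/QueuedActionListener pair:
// enqueue an action for a phase, run it when that phase fires, then discard it.
public class DeferredActionQueue {
    public enum Phase { RESTORE_VIEW, INVOKE_APPLICATION, RENDER_RESPONSE }

    private static class QueuedAction {
        final Phase phase;
        final Runnable action;

        QueuedAction(Phase phase, Runnable action) {
            this.phase = phase;
            this.action = action;
        }
    }

    private final List<QueuedAction> queue = new ArrayList<>();

    public void enqueue(Phase phase, Runnable action) {
        queue.add(new QueuedAction(phase, action));
    }

    // Called by the phase listener; runs and removes the actions for this phase.
    public void onPhase(Phase phase) {
        Iterator<QueuedAction> it = queue.iterator();
        while (it.hasNext()) {
            QueuedAction qa = it.next();
            if (qa.phase == phase) {
                qa.action.run();
                it.remove();
            }
        }
    }
}
```

In the blog's scenario, the "remove from list" work would be enqueued for RENDER_RESPONSE, so the item survives long enough for the fade to render, then really disappears.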

Wednesday Oct 15, 2008

Growing a VMWare Fusion Disk

This is probably the most common operation users will want to perform on existing VMs. On the Hard Disk settings page for the VM, you can find a nice little slider for the disk size. Simple enough, you think. Change it, apply, and wait while it chugs for a long while. Reboot your VM, and find that, to your surprise, the VM still sees the old, smaller disk size.

This makes sense when you understand what changing the disk size in Fusion's Hard Disk settings page really does. It changes the disk size, but it doesn't update the partition table for the guest OS. Hardly what the average user expects.

If you Google this topic, you'll find lots of information about using archaic VMWare command line tools for the purpose. Most of the threads suggest that creating a new VM and reinstalling the guest OS is the easiest option (!!!). Luckily, there's a better solution. Download GParted, a Linux partition manager that understands NTFS as well. Get the live CD (ISO), and configure VMWare to boot from it. You need to tell your guest OS to boot from the CD; do this by quickly pressing F2 before it boots and changing the boot device order. GParted has a fairly nice GUI with an obvious "resize" option.

As I mentioned earlier, this is not what the average user expects from Fusion's uber-high-level settings GUI. The grow-disk option should either be removed from the GUI, or it should do the obvious thing (grow the disk, then re-partition to use all the added space). If the user wants to do something different, like create multiple partitions, let them use the command line utilities. There aren't many reasons to do anything other than use all the space in a single partition.

Friday Jun 06, 2008

Importing / Exporting Virtual Disk Images with Virtual Box

VirtualBox is amazing for an open source project. It comes close to VMWare in usability. Where it falls short is importing and exporting images: there's no support in the user interface for this. If you feel comfortable using the command line, you can export and import a disk image. Here's how.

To export:

  1. Shut down the VM
  2. Use the "VBoxManage clonevdi ..." CLI to copy the disk image. On my Mac this was at /usr/bin/vboxmanage

To import:

  1. Register the disk image: File>Virtual Disk Manager>Add
  2. Create a new VM, and use the disk image you added in step 1

The obvious problem is that the .vdi file is just a disk image. It doesn't capture the virtual machine. You must know the correct parameters to use when you create the VM in the import process or the disk image won't be able to run. It also doesn't capture the state of the virtual machine. 

VBox has a very nice snapshot feature. If one could import / export snapshots, it would be perfect.

Wednesday May 07, 2008

Portlet Syndication

Portlets can be syndicated to existing websites to augment their dynamic content. Check out the WebSynergy project for a demo of this.

Thursday Jul 12, 2007

Project Sluts

In an interview with businessofsoftware.org, Tim Lister defined three software project anti-patterns that are interesting if for no other reason than their colorful names.

A Project Slut is an organization that takes on too many projects ("it just can't say no," he says). The result is that nothing gets done, or nothing gets done well. I can say as a developer that this has a negative effect on team morale. The feeling that everything is a hack or quick fix, rushed out the door to users, is not a good one. It's hard to take pride in your work. There's a lot to be said for picking fewer things and doing them well. Developers' productivity is in proportion to their pride in what they're doing. If they don't feel good about it, it's much less likely you'll see them there after hours making sure it's done well.

A Brownian Project is one that has too many people added too early on. For example, the project might have so many people involved at the design phase that the design progresses slowly as a result of trying to incorporate everyone's ideas ... or worse, you end up with what I'd call a "compromise design". A compromise design sacrifices coherence to appease the people involved in the design. This is often the case with specifications designed by a committee of interested software development corporations.

Notice that I said "corporations". I am reminded of the JDO2 specification, which was controversial in its time because it overlapped with an up-and-coming specification we now call EJB3. Reading the official "votes" from various companies, many were against it. Nevertheless, Sun pushed it through, adding to the later confusion of many users when JPA came on the scene. I'll never forget the Apache Software Foundation's comment though: "Let a thousand flowers bloom" (a yes vote).

A Dead Fish project is one that sets an unrealistic date, or impossible requirements, etc., and is therefore doomed to fail. This doesn't strike me as a useful pattern, or a pattern at all. A pattern is something you can use to help you solve a problem. No one starts a project with the foreknowledge that it's going to fail. Pick any project, and you can find at least someone who thinks it's going to fail, and someone who thinks it will succeed. This pattern can only be applied in hindsight, which makes it useless.

Wednesday Jul 11, 2007

iTunes + iPod + Shared Disks: Managing A Large, Shared Library

Most folks who have Macs own a notebook version. If you have a significant music / video library, you won't have the disk space on your notebook. This is one thing I never understood about iTunes: any significant music collection is going to swamp the notebook's disk. Yet the defaults are all set to store the library on the local disk, and to make copies of files you add to the library in the iTunes folder. For the latter, if you don't actually understand what's going on, you can wind up with multiple copies of files hanging around.

Another point is that most people want to share their library between users in their household. I think there's a way to share your iTunes library from your Mac (I've never tried it), but that assumes that any consumer of the media is also using iTunes. iTunes is not a general purpose media player; I have a lot of content in formats it won't play, so I use other players like VLC in some cases.

After being an early adopter of several different types of digital audio players, the iPod model was a hard pill to swallow. The basic approach is that the iPod is a read-only device to everything except iTunes, and its format and content are completely managed by iTunes. There's only one reason the iPod works this way: control. Apple can force you to go through iTunes, which is as much a storefront as anything else, to manage your library. The iPod could easily have been a standard USB mass storage device that allows files to be dragged and dropped onto it. But then you wouldn't be forced through iTunes, and the iTunes Store. To be fair, this would require more logic on the iPod to index the library, which might have been a technically harder problem. Also, the iPod only plays AAC format, so even if you import MP3 files into your iTunes library, they are converted to AAC before they are copied to the iPod. Again, this allows the iPod to be simpler, in that it doesn't need to understand anything other than AAC; it can rely on iTunes for that. This probably makes importing music slower, but that is not an everyday thing for most people. The only good thing here is that iTunes hides this detail from you.

Unfortunately, the iPod model is used by most other digital audio manufacturers (there are a few exceptions). The difference is that for them, the software is not iTunes, it's Windows Media Player. So when I was faced with the choice of purchasing an oddball brand that supports USB mass storage, a player that requires Windows, or an iPod, I chose the lesser of the three evils.

But anyway, back to the problems of disk space and sharing media. This is my best solution ...

To get around the limited disk space / sharing problem, the actual content of my library is stored on a large shared disk. I use a neat little NAS device, which is essentially a drive enclosure that runs a slimmed down version of Linux with the sole purpose of running an SMB and FTP server. This particular device has been pretty darn stable for me. I chose it because it's an enclosure, not a sealed unit already containing a drive. This allowed me to buy a cheap 500GB drive and keep the cost down. The only drawback is that it has an ATA-100 interface, not SATA. You'd think this would be an advantage: ATA-100 is older, and the drives should be cheaper. However, it's so old that the drives are getting hard to find, and they end up costing about the same. SATA is much faster, but that doesn't matter, as ATA-100 is very much fast enough to keep up with either the USB or network interfaces to the drive.

A drawback of "all on the shared disk" is that the library doesn't leave the house with you. It's not on your notebook. Well, that's why I carry the iPod around with me.

With the NAS, I can mount the SMB drive on any computer in the house and access the media in any way I want. For example, an integral part of my "media center" is an older Windows PC connected to my "TV" and some cheap computer speakers. I use this to play music, movies, and video in general: .avi and .mpg files, content streamed from the internet, and so on. Of course the PC plays DVDs also, so I could throw out my DVD player too. The PC just mounts the shared disk and plays the files from there. You don't need a powerful "media PC" for this, and it doesn't have to be a high-tech, expensive solution. Just about any PC manufactured in the last 6 years can play DVDs, .mp3s, and .avi files. Mine is a PC simply because I had one lying around. It could also be a Mac Mini.

As a side note, I purchased a nifty wireless keyboard for the PC. I set the fonts big, and I can sit on my couch and control everything. Very nice, even if the keyboard makes a rather large remote. This model is pretty small, and has a built-in trackball mouse so the keyboard and mouse are one single unit. It's worked flawlessly, although you wouldn't want to do a lot of work with it as it's small and the trackball isn't the best. FYI, if you're looking at wireless keyboards, there are models with a range of "up to 10 feet", and then there are the "up to 50 feet" models. Get the latter; even if you're within 10 feet of your media center, the short-range models get flaky at 3-5 feet.

For iTunes, I mount the drive via SMB and import the media from the shared drive. This must be slower, but I don't know by how much because I've never done it any other way. I imagine it's a lot slower, as in this case it is accessing the shared drive over a wireless connection. Anyway, with about 20G on the iPod, it takes about 5 minutes to do a no-op sync operation from iTunes. That's okay by me, as I can just connect it and walk away, and it's always done by the time the iPod is charged anyway.

One problem I ran into recently ... the shared drive is just a DHCP client, so it can potentially get any of the IP addresses in the block that my router makes available. I could go into the router and force it to assign a particular IP address to the NAS each time, but I've needed to "reset" my router several times, which makes it lose this type of configuration. It's a pain in the arse to configure, so I just don't like to add any complex config to it.

So the other day, I needed to reboot the router. My NAS picked up a different IP, which meant a different automount point (I automount the NAS's SMB share). iTunes could no longer resolve any of my library. Neat. There is no way to fix this through iTunes itself, except by "locating" each individual file. There are no utilities I could find to do it either.

In retrospect, I should have created a symlink to the mount point, and then imported from that location. Then if the IP of the NAS changes, I can just change the link. I've done this now, but I didn't think of it initially.
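The symlink approach looks something like this (a sketch; ~/mnt is where I keep my mount points, and the automount paths and IPs are examples from my setup):

```shell
mkdir -p ~/mnt

# stable path for iTunes to import from; the target is wherever
# the NAS share currently automounts
ln -s /Network/Servers/192.168.0.101/PUBLIC ~/mnt/PUBLIC

# later, if the NAS picks up a different address, just repoint the link
ln -sfn /Network/Servers/192.168.0.100/PUBLIC ~/mnt/PUBLIC
```

iTunes only ever sees ~/mnt/PUBLIC, so the library stays valid no matter what the router hands out.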

As it turns out, it's pretty easy to muck w/ the iTunes library. There's a binary formatted version, but there's also an XML backup. To solve this, I sed'd the XML library file and changed the "location" of all of the media to the symlink.

sed -e "s#file://localhost/private/Network/Servers/192.168.0.10[10]/PUBLIC#file://localhost/Users/jtb/mnt/PUBLIC#g" "iTunes Music Library.xml" > ../new.xml 

(Notice that I used a different separator in the sed expression to avoid escaping all of the /'s)
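To illustrate, these two commands are equivalent; sed treats whatever character follows the `s` as the delimiter, so picking `#` spares you the backslashes:

```shell
# same substitution, two delimiters: '/' needs every path slash escaped
echo "/old/path/file.mp3" | sed -e 's/\/old\/path/\/new\/path/'
# with '#' as the delimiter, the slashes can stay as-is
echo "/old/path/file.mp3" | sed -e 's#/old/path#/new/path#'
# both print /new/path/file.mp3
```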

Then I removed all of the iTunes files, restarted iTunes, and did a File->Import Library operation on the new XML library I created. This took a long time to re-import everything and calculate "gapless playback" (whatever) but it seems to have worked.
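If you try this yourself, it's worth sanity-checking the substitution on a scrap of XML before blowing away your library. A self-contained run-through (the paths here are shortened stand-ins for the real location strings):

```shell
# one line standing in for a library entry
echo '<string>file://localhost/old/mount/PUBLIC/song.mp3</string>' > /tmp/old.xml

# same style of substitution as above, with '#' as the separator
sed -e 's#file://localhost/old/mount#file://localhost/Users/jtb/mnt#g' /tmp/old.xml > /tmp/new.xml

# confirm no stale locations survived
grep -q 'old/mount' /tmp/new.xml && echo "stale paths remain" || echo "all locations rewritten"
# → all locations rewritten
```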

One thing that stumped me for a while was the "location" format in the XML library. It's a file:// URL, so I originally tried ...

file://Users/jtb/mnt/PUBLIC (this is my symlink to /Network/Servers/...)

As it turns out, iTunes wants a HOST NAME in the FILE URL, like this:

file://localhost/Users/jtb/mnt/PUBLIC

Not what you'd expect.

Of course, the key to using iTunes this way is to set your preferences to NOT copy files, and NOT rename them (see Preferences->Advanced). Basically, tell iTunes to leave the media files alone and just use them from their original imported location.
Thursday Jun 21, 2007

Authoring Community Services with Portal Server

My new article just went live: Authoring Community Services with Portal Server. It explains, in great painful detail, what it takes to augment the stock set of community services.

While Portal Server comes with a stock set of services including blog, wiki, surveys, polls, discussions and file sharing, some users will want to add their deployment-specific services. This article will help you do that. If you have any questions on the article or need further elaboration, please feel free to contact me at: jeffrey.blattman@gmail.com, or subscribe to the OpenPortal users alias ... OpenPortal is the open source version of Sun Portal Server.

 
