Interesting question, isn't it?
Will the time to recompile invalid objects post upgrade decrease once fixed object stats have been gathered?
First of all, fixed object stats on X$ tables won't be gathered by default. The X$ structures are undocumented; the V$ views are built on top of them and should be used instead, even though it can occasionally be useful to access X$ tables directly, such as X$BH (buffer headers, describing the current contents of the buffer cache).
Anyway, back to the upgrade topic. Post upgrade you'll have a good bunch of invalid objects in your database. You would now start the recompilation with either utlrp.sql or, to control the CPU load, with utlprp.sql <number> to recompile with a distinct <number> of parallel threads. Since Oracle 10g we have been using the package UTL_RECOMP for recompilation, which takes the object dependencies into consideration. Jobs will be created to run the recompilation tasks in parallel and leverage the CPU power of multiple cores. In both cases the script output displays four queries to monitor progress and jobs.
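As a minimal sketch of what happens under the covers: you can call UTL_RECOMP directly and watch the invalid object count drop. The thread count of 8 below is purely an illustrative value, not a recommendation:

```sql
-- Sketch: manual parallel recompilation (run as SYSDBA);
-- 8 is an example thread count, size it to your system
EXEC UTL_RECOMP.RECOMP_PARALLEL(8);

-- Simple progress check: remaining invalid objects
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
```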
In larger databases with many objects and components, our recommendation is always to gather fixed object stats first, prior to starting the recompilation. Some time ago I learned from two very different customer database projects that these stats speed up the efficient job creation for recompilation. And last week I got this feedback from an EBS 220.127.116.11 to 18.104.22.168 upgrade project:
SQL> exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
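If you want to verify whether fixed object stats exist at all before (or after) gathering them, a quick sanity check along these lines might help (DBA_TAB_STATISTICS exposes fixed tables via OBJECT_TYPE):

```sql
-- Fixed tables that still have no statistics gathered
SELECT COUNT(*)
FROM   dba_tab_statistics
WHERE  object_type = 'FIXED TABLE'
AND    last_analyzed IS NULL;
```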
It's always worth a try - you won't see a dramatic effect on a small ORCL-type database with a few thousand tables, indexes and views. But in a Siebel, PeopleSoft, EBS or any other huge application environment the effect can be remarkable. Still, I agree that in this EBS case 12 hours of recompilation time is a lot. The only way to decrease this time will be the (temporary) addition of CPU power to the system to satisfy more parallel recompilation threads.
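If you do add CPU power, a rough sketch for sizing the parallel degree is to check what the instance actually sees and pass that along (the exact value you choose is your call, this just shows where to look):

```sql
-- How many CPU threads does the instance see?
SELECT value AS cpu_count
FROM   v$parameter
WHERE  name = 'cpu_count';

-- Then start recompilation with a matching degree, e.g.:
-- @?/rdbms/admin/utlprp.sql <cpu_count>
```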