
Recent Posts

Oracle VM Templates now available for the SPARC platform

Oracle VM Templates provide a way to quickly deploy pre-installed and pre-configured operating system images. By using Oracle VM Templates you avoid having to install and configure the operating system and software yourself. Until recently this capability existed only on the x86 platform. As of today, Oracle VM Templates are also available for SPARC. With the release of Oracle VM Manager 3.2 it became possible to discover SPARC servers running the Oracle VM Server for SPARC virtualization technology on the network and to manage their virtual machines exactly as with Oracle VM Server for x86, and thus take full advantage of Oracle VM Templates. For more information I recommend reading Oracle VM Templates now available on SPARC platform. Visit the Oracle Software Delivery Cloud to download the available OVM Templates.


Building and publishing your own IPS packages in Solaris 11

I run Solaris 11 on my laptop, and in my spare time I am writing a small application for managing Oracle VM Server for SPARC domains (aka LDoms). I write it in Python/GTK/NetBeans, but that is not the point. The point is that I needed the pylibssh2 package on my system so I could connect from Python over ssh to remote hosts.

It would seem simple enough to download pylibssh2 and libssh2, build them and install them. But I wanted these to be packaged as proper Solaris IPS packages, and along the way to learn how to build packages. Let me say right away that I am no great expert in compilers and build tools; all I know is that you run configure and make as described in the README ;)

A secondary goal is to show how to avoid LD_LIBRARY_PATH, and certainly never to set it in /etc/profile.

I will not describe every step in great detail. For details, consult the man pages and the links at the end of this post; these are the instructions I followed myself.

Step 0. Set up a repository as a plain directory. More advanced users can set it up as a service instead, so the packages can be served to other clients.

# zfs create rpool/export/repo
# zfs set atime=off rpool/export/repo
# chown roivanov:staff /export/repo
$ pkgrepo create /export/repo
$ pkgrepo set -s /export/repo publisher/prefix=tools
# pkg set-publisher -g /export/repo tools

Step 1. Build the package. I do all package builds in $HOME/Projects/IPS/<package name>, but that is not essential. I also start a separate terminal for each package build so that environment settings do not get mixed up. For the build we will need Sun Studio cc or gcc.

$ export PKGREPO=/export/repo
$ mkdir -p $HOME/Projects/IPS/libssh2
$ cd $HOME/Projects/IPS/libssh2
$ export PKGROOT=`pwd`
$ unset LDFLAGS
$ PATH=$PATH:/opt/solarisstudio12.3/bin
$ export CC=cc

or

$ export CC=gcc

During the build the package must be copied (installed) into ../root instead of /usr:

$ export DESTDIR=$PKGROOT/root

The built package will be staged in the local directory ../root; the final installation will go to /usr.

$ [ -d root ] && rm -rf root
$ cp ~/Скачивание/libssh2-1.4.2.tar.gz .
$ tar xzf libssh2-1.4.2.tar.gz
$ cd libssh2-1.4.2

If the package uses libraries from /usr/local/lib, set LDFLAGS and _forget_ about LD_LIBRARY_PATH:

$ export LDFLAGS="-L/usr/local/lib -R/usr/local/lib"
$ ./configure
$ gmake && gmake install
$ cd ..

A Python package (pylibssh2) is built slightly differently:

$ python setup.py install --root=../root

Step 2. Prepare the package manifest:

$ cat > MANIFEST.files.mog << 'EOF'
set name=pkg.fmri value=library/libssh2@1.4.2,0.5.11-11
set name=pkg.description \
    value="libssh2 is a client-side C library implementing the SSH2 protocol"
set name=pkg.summary value="libssh2 library"
set name=maintainer value="First Last <first.last@domain.com>"
set name=info.upstream-url value=http://www.libssh2.org/
set name=variant.arch value=$(ARCH)
license ../libssh2-1.4.2/COPYING license=BSD
<transform dir path=usr$ -> edit group bin sys>
EOF

Here library/libssh2 is the package name, 1.4.2 is the package version, 0.5.11 is the release, and 11 is the package build number. description is the description and summary is the short description of the package. variant.arch says which platform the package is built for; it is possible to ship files for several platforms in one package, but I will not do that yet. The license line names the license file and the license type. The transform is needed so that the final package carries the correct owner group for the /usr directory.

Collect the list of the package's files:

$ pkgsend generate root > MANIFEST.files.1

Merge in the information from the manifest file and apply the transforms:

$ pkgmogrify -DARCH=`uname -p` MANIFEST.files.1 MANIFEST.files.mog > MANIFEST.files.2

Generate the full list of dependencies:

$ pkgdepend generate -md root MANIFEST.files.2 | pkgfmt > MANIFEST.files.3

Resolve the file-level dependencies into package names. This step takes a while.

$ pkgdepend resolve -m MANIFEST.files.3

The output is the finished manifest MANIFEST.files.3.res. If you wish, you can check this file for conflicts with existing repositories before the package is finally published:

$ pkglint -c ../lint-cache -r http://pkg.oracle.com/solaris/release/ MANIFEST.files.3.res
$ pkglint -c ../lint-cache-local -r /export/repo MANIFEST.files.3.res

And finally, publish the package:

$ pkgsend publish -s $PKGREPO -d `pwd`/root MANIFEST.files.3.res

Installing the package and managing the repository.

To see which packages are in the repository:

$ pkgrepo list -s /export/repo/

To remove an obsolete package from the repository:

$ pkgrepo remove -s /export/repo/ libssh2@1.4.2,0.5.11-8:*

To show information about a package in the repository:

$ pkg info -r libssh2

To do a dry run of the installation, without actually installing the package:

$ sudo pkg install -nv libssh2

To install the package:

$ sudo pkg install libssh2

To update the package:

$ sudo pkg refresh
$ sudo pkg update
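Since the whole point of the -R flag was to avoid LD_LIBRARY_PATH, after installation it is worth checking that the runpath really was recorded in the binary. A quick sketch, assuming the default /usr/local prefix used above and the usual libssh2.so link name:

$ elfdump -d /usr/local/lib/libssh2.so | grep -i runpath   # should list /usr/local/lib, recorded by -R
$ ldd /usr/local/lib/libssh2.so                            # dependencies should resolve without LD_LIBRARY_PATH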
Reading list:
[1] How to Create and Publish Packages to an IPS Repository on Oracle Solaris 11, http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-097-create-pkg-ips-524496.html
[2] Publishing your own packages with IPS - getting started, https://blogs.oracle.com/barts/entry/publishing_your_own_packages_with
[3] How to create your own IPS packages (Ghost Busting), http://blogs.oracle.com/cwb/entry/how_to_create_your_own
[4] Introduction to IPS for Developers, http://www.oracle.com/technetwork/systems/hands-on-labs/introduction-to-ips-1534596.html


Oracle Solaris 11 is coming soon

On November 9th the long-awaited and, I hope, grand event dedicated to the release of Solaris 11 will take place. Those who are in New York City and want to attend in person should read the program below and register. There you can and should also register for the webcast. (11 reasons why the Oracle Solaris 11 release date was not set to 11-11-11)

Join Oracle executives Mark Hurd and John Fowler and all key Oracle Solaris Engineers and Execs at the Oracle Solaris 11 launch event in New York, Gotham Hall on Broadway, November 9th and learn how you can build your infrastructure with Oracle Solaris 11 to:

* Accelerate internal, public, and hybrid cloud applications
* Optimize application deployment with built-in virtualization
* Achieve top performance and cost advantages with Oracle Solaris 11-based engineered systems

The launch event will also feature exclusive content for our in-person audience, including a session led by the VP of core Solaris development and his leads on Solaris 11 and a customer insights panel during lunch. We will also have a technology showcase featuring our latest systems and Solaris technologies. The Solaris executive team will also be there throughout the day to answer questions and give insights into future developments in Solaris. Don't miss the Oracle Solaris 11 launch in New York on November 9. REGISTER TODAY!


Oracle Day 2011 - Moscow

Dear colleagues! Oracle Corporation invites you to take part in the Oracle Day 2011 Business Innovation Forum, which will take place on November 2, 2011 in Moscow, at the Radisson Slavyanskaya hotel (Europe Square, 2). This major event covers all of the corporation's lines of business and product lines, from software to hardware systems. The special guest of Oracle Day 2011, Andrew Sutherland, Oracle Senior Vice President for EMEA, will open the plenary session with a keynote on Oracle's strategy. Company experts will present the technologies and solutions just announced at the Oracle OpenWorld conference in San Francisco and explain what advantages these innovations can bring to your business. Speakers at Oracle Day will include top managers of corporations and major Russian enterprises from the government, financial and telecommunications sectors, as well as industry, retail and distribution. Oracle Day is a unique venue where in a single day you can get acquainted with the full Oracle portfolio for various industries: the flagship Exadata and Exalogic systems, optimized hardware and software solutions, new versions of databases, middleware, analytics, business applications, servers and storage systems. Thanks to its packed program, world-premiere announcements and the chance to see the full stack of Oracle's enterprise-class products in one day, the forum has been and remains one of the key events in the Russian and worldwide IT industry. You can register on the Oracle Day 2011 forum website. Don't miss Oracle Day 2011 in Moscow!



Oracle Database Single Instance on ZFS: learning to cook it right.

Dmitry Volkov has put together a very useful and thorough overview of the various storage options for Oracle Database. However, it contains a few inaccuracies regarding ZFS, and here I will give the correct values. I will also touch separately on the question of fragmentation. I assume you are already well familiar with ZFS concepts. If your choice has already fallen on ZFS, here is how to prepare it.

To begin with, I strongly recommend reading the documentation before starting to use ZFS (as with anything else, for that matter). Find the whitepaper "Configuring Oracle Solaris ZFS for an Oracle Database" and read it carefully; at the moment it can be found here. Also read http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases. The bottom line:

- The recordsize parameter of each file system (FS) must be set before any data files are created on it. The FS recordsize must equal the database's db_block_size. In the general case, data and indexes can live on one FS with recordsize=8k; redo, undo, temp and archive logs go on other file systems with recordsize=128k. The disk pool can be shared by all the file systems.
- Set the logbias property to throughput for the data-file FS and to latency for redo.
- It is advisable to set primarycache to metadata for the FS holding undo and archive logs, thereby switching off unnecessary caching of the data itself.
- It is advisable to limit the size of the FS cache in RAM by setting in /etc/system: set zfs:zfs_arc_max = 10737418240 (10GB, for example). Choose the value based on: the total amount of RAM; the total amount of RAM minus the SGA size; a desired minimum of about 2-4GB.
- The presence of snapshots and their number in ZFS does not affect performance in any way, since each snapshot is merely disk space occupied by old versions of data blocks. When data is updated, snapshots are not touched at all.
- Disable block integrity checking in the database, since ZFS does this itself. Data integrity is a fundamental property of ZFS.

On fragmentation in ZFS and read/write speed. Data fragmentation increases the time needed to read data, i.e. latency grows: to read blocks scattered across the disk surface one has to wait for the disk heads to move. To speed up reads, ZFS can be extended with a read cache (L2ARC). The Sun Storage F5100 Flash Array lets you keep up to 2TB of data in the cache and thus completely neutralize the latency caused by fragmentation (ZFS L2ARC description and tests). Moreover, ZFS writes data to disk in groups (ZFS transactions, not to be confused with Oracle transactions). This means that a group of blocks written at the same time is laid out as densely as possible rather than scattered chaotically block by block, which is much faster than placing each block at its own spot and moving the disk heads every time. To speed up writes it is also advisable to use the Sun Flash Accelerator F20 PCIe Card, an internal PCI card providing 96GB of write cache (ZFS ZIL).

An additional benefit of ZFS always writing to new, unused space is that the physical wear of the disk surface is distributed more evenly, whereas always writing to one and the same place wears that spot out quickly.

For those not ready to move to Oracle Database 11g R2, ZFS + F5100 + F20 is the only way to get an analogue of Oracle Flash Cache.
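Pulled together, the recommendations above might look like this; a minimal sketch, where the pool and file system names (dbpool and friends) are hypothetical:

# zfs create -o recordsize=8k   -o logbias=throughput    dbpool/data   # data files and indexes (db_block_size=8k)
# zfs create -o recordsize=128k -o logbias=latency       dbpool/redo   # redo logs
# zfs create -o recordsize=128k -o primarycache=metadata dbpool/undo   # undo; cache metadata only
# zfs create -o recordsize=128k -o primarycache=metadata dbpool/arch   # archive logs
# echo "set zfs:zfs_arc_max = 10737418240" >> /etc/system              # 10GB ARC cap; takes effect after reboot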
Over the past year, Oracle Database on ZFS + F5100 + F20 was tested jointly with several Russian ISVs. In none of those cases was the fragmentation effect significant or even noticeable. If your company is an ISV (a vendor of packaged software) and you would like to check how your application runs on Solaris 10/ZFS/S7000, contact me and we will set up a project and find out.



Script to check CPU core ownership on Oracle's Sun SPARC Enterprise T-Series systems

There are a few scripts in my test lab which I use while running tests. While most of them are very specific, there is one which may be of interest to you. This script is specific to T-series servers (T2 and T2 Plus processors) running Oracle VM Server for SPARC. It allows you to verify whether any CPU core is shared between two or more logical domains. If CPU threads from the same CPU core are assigned to different logical domains, this can reduce the efficiency of those CPU threads: a single CPU core has a memory cache shared between the CPU threads belonging to that core, and cache thrashing occurs when two programs use the cache for different memory pages and the load is very heavy.

Let's check how it works. First, we have two guest domains configured, each with 16 CPU threads assigned to it:

# ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      24    8000M    0.7%  1d 2h 27m
ldg1             active     -n----  5000    16    15996M   2.4%  1d 17m
ldg2             active     -n----  5001    16    15996M   8.1%  2d 2h 48m

The script reports that the cores are distributed between the domains in the following way:

# ./check_core_assignment.pl
Core 0 used by primary
Core 1 used by primary
Core 2 used by primary
Core 12 used by ldg2
Core 13 used by ldg2
Core 14 used by ldg1
Core 15 used by ldg1

Let's add 1 CPU thread to each of the domains ldg1 and ldg2:

# ldm add-vcpu 1 ldg1
# ldm add-vcpu 1 ldg2
# ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv-  SP      24    8000M    0.4%  1d 2h 47m
ldg1             active     -n----  5000    17    15996M   2.4%  1d 37m
ldg2             active     -n----  5001    17    15996M   7.7%  2d 3h 8m

And check what the script reports now:

# ./check_core_assignment.pl
Core 0 used by primary
Core 1 used by primary
Core 2 used by primary
Core 3 used by ldg1 ldg2 MultiUsage detected
Core 12 used by ldg2
Core 13 used by ldg2
Core 14 used by ldg1
Core 15 used by ldg1

Oops: some threads of core 3 are now assigned to domain ldg1 and some to domain ldg2. This might lead to a performance impact. To avoid this situation, always add or remove CPU threads in multiples of 8 (the number of CPU threads per CPU core on the T2 and T2 Plus platform).

The script itself:

# cat check_core_assignment.pl
#!/usr/bin/perl

# Collect, for every physical core, the list of domains using its threads.
@AllCores = ();
open(DOM, "ldm ls -p|") || die "failed to get domains";
while (<DOM>) {
        if ( m/DOMAIN\|name=([^\|]*)/ )
        {
                $domain = $1;
                open(CPU, "ldm ls-bindings -p $domain|") || die "failed to get cpus for $domain\n";
                while (<CPU>)
                {
                        # physical thread id (pid); 8 threads per core on T2/T2 Plus
                        if ( m/\|vid=\d*\|pid=(\d*)/ ) {
                                $core = int($1 / 8);
                                push (@AllCores, $core) unless $seen{$core}++;
                                push (@{$Usage[$core]}, $domain) unless $seen{$core, $domain}++;
                        }
                }
        }
}

foreach $c (sort {$a <=> $b} @AllCores) {
        my $mu = 0;
        print "Core $c used by ";
        foreach $k (sort @{$Usage[$c]}) {
                print "$k ";
                $mu++;
        };
        if ($mu > 1) { print "MultiUsage detected"; }
        print "\n";
}
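If a misaligned assignment like the one above has already happened, the fix is to bring the allocations back to core boundaries; a short sketch, using the domain names from this example:

# ldm remove-vcpu 1 ldg1   # take back the stray thread
# ldm remove-vcpu 1 ldg2
# ldm add-vcpu 8 ldg1      # grow by one whole T2/T2 Plus core instead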



OpenSolaris build 128 now available - zfs dedup in it

I've been waiting for this update to try zfs deduplication. Knowing that a lot of files would be written as a result of the image update, I set zfs compression on the rpool file systems (except swap and dump) before doing the update. Zfs compression has already been on my home files for one month and has proved to save space/time/power. Here is what I have with zfs compression so far:

$ zfs get -r compressratio
NAME                              PROPERTY       VALUE  SOURCE
rpool                             compressratio  1.14x  -
rpool/ROOT                        compressratio  1.05x  -
rpool/ROOT/b127                   compressratio  1.93x  -
rpool/ROOT/b128a                  compressratio  1.05x  -
rpool/roman                       compressratio  1.18x  -

Now updating the image. To get the recent bits:

$ pfexec pkg image-update --be-name b128a

Reboot into the new boot environment, crossing fingers ;)

To update zfs to the most recent version:

$ pfexec zpool upgrade rpool

Remembering an old problem, I did this in advance. Check the device name of your zfs:

$ pfexec installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t0d0s0

First hours impressions:
1. The time slider daemon is broken. Bug and fix here.
2. Firefox is now 3.5.5.
3. zfs now has deduplication.

$ zfs get dedup rpool

I am going to set it to sha256,verify. The reason for verify is that I am paranoid about my data. I believe the following: if your files were born as a result of cp, then it is fine not to verify; in that case you trust that you have identical files. In my case data comes from different sources, so if two blocks get the same sha256, I allow that it may be a collision and I would like to verify that they are actually the same.

$ pfexec zfs set dedup=sha256,verify rpool
$ pfexec zfs set dedup=off rpool/swap
$ pfexec zfs set dedup=off rpool/dump

Time and Math will prove who is right. Anything that can possibly go wrong, does.
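Later on, the pool-wide effect can be checked without any math; a small sketch:

$ zpool get dedupratio rpool   # overall dedup ratio for the pool
$ zpool list rpool             # the DEDUP column reports the same figure on dedup-capable builds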



Ordinary user impressions of OpenSolaris updated to build 126

It's been a while since my last blog post. Since many projects are still in progress, I will share my impressions of the recent OpenSolaris build, b126. I am a long-time user of OpenSolaris: it is my tool for everyday work and I have it on my notebook. But still I am just a user, a little bit of an admin, and definitely not a developer of this system. OpenSolaris 2009.06 was a major milestone with almost every feature polished to that date, but some things were not as good as I wanted them to be. So I took the long road of updating to dev builds; I had all of them starting from 116 up to 126. Every build brought improvements as well as new bugs. Only starting with 126 can I say that I feel good about it. So here is the list of why to go dev and what you should be ready for. I am not noting in which release each bug was fixed. Consider this a "diff 111a 126"; I do not pretend to show the whole diff, only what matters to me:

- Firefox was a beta in 2009.06 and a bunch of needed addons did not work in that beta. That was the first reason I started to go dev. I probably could have updated applications only, but I decided to keep the whole system recent. Firefox is 3.5.3 now.
- The System monitor applet had disk activity broken, but now it is fine.
- Overall system performance (Compiz at maximum visuals, Firefox) is much better. This is purely subjective.
- The sound subsystem has changed. The keyboard beep annoys my coworkers; I did not figure out how to mute it or lower its volume permanently.
- Pidgin's yahoo icon problem is fixed; it was showing offline even when you were connected. Pidgin is 2.6.2.
- In some builds the printer monitor applet went wild eating CPU, but now it's fine.
- I noticed that my /var was growing. This is because each 'pkg update' stores the downloaded packages. It can be fixed with
  $ pfexec pkg set-property flush-content-cache-on-success true
- No more Ctrl-D in Terminal; you have to use "exit". I didn't look into it yet. Typing "exit" may be annoying, but it saves you from accidentally closing terminals. Right now I have just 5 open sessions in the terminal, usually more.
- I have a strange problem with Marvell Yukon Ethernet. Before 125 I had a driver from the vendor, but it does not work on builds 125 and 126, so I removed YUKONXsolx and gave yge a try. Something strange is happening here: I have an external USB disk drive attached, which I use periodically as an rpool mirror to have a spare. If this drive is attached when the system boots, my ethernet refuses to work, so I need to have the USB disk drive detached at boot. Details are here.
- Now waiting for zfs dedup.

Other frequently used applications: Rhythmbox with Audio.fm, Thunderbird, OpenOffice, QCAD, Gimp, Stellarium (Blastwave, please update your repository with recent versions! And why is it so rarely online?), Adobe Reader.



Setting up a GPRS modem in OpenSolaris

My laptop still carries two operating systems. The first is OpenSolaris build 118, in which I work all the time. The second one I use for internet access at the dacha, and it also has an antivirus installed. More than once I have asked myself whether I really need that second one: maybe I can configure the modem in OpenSolaris? After a bit of googling I found a guide for setting up a Nokia E71 on Vodafone UK. All that remained was to adapt it for a SonyEricsson K750i on MTS. Let's get started: create a link for the phone and create the configuration files.

# ln -s /dev/term/0 /dev/se750i
# touch /etc/ppp/options
# cat /etc/ppp/mts-chat
'' 'ATZ'
'OK' 'AT+CGDCONT=1,"IP","internet"'
'OK' 'ATD*99#'
CONNECT ''
# cat /etc/ppp/peers/mts
modem
se750i
115200
noauth
noipdefault
defaultroute
usepeerdns
noccp
novj
user "mts"
nodetach
show-password
crtscts
connect "/usr/bin/chat -V -t15 -f /etc/ppp/mts-chat"
# echo 'mts * mts *' >> /etc/ppp/pap-secrets
# pppd call mts
ATZ
OK
AT+CGDCONT=1,"IP","internet"
OK
ATD*99#
CONNECT
Serial connection established.
Using interface sppp0
Connect: sppp0 <--> /dev/se750i
LCP: Rcvd Code-Reject for Identification id 223
Remote message: Congratulations!
local IP address 10.21.25.201
remote IP address 192.168.1.1
primary DNS address 213.87.0.1
secondary DNS address 213.87.1.1

To end the session, just press Ctrl-C:

^C
Terminating on signal 2.
Connection terminated.
Connect time 8.6 minutes.
Sent 78100 bytes (702 packets), received 22106 bytes (40 packets).

So far the only inconvenience is having to fill in /etc/resolv.conf by hand:

# cat /etc/ppp/resolv.conf > /etc/resolv.conf

Let's hope that some day pppd will be managed by nwam.
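The resolv.conf step can be automated: pppd runs /etc/ppp/ip-up (if it exists and is executable) each time the link comes up. A minimal sketch, assuming the standard Solaris pppd behavior:

# cat > /etc/ppp/ip-up << 'EOF'
#!/bin/sh
# refresh DNS servers received from the peer (usepeerdns)
cat /etc/ppp/resolv.conf > /etc/resolv.conf
EOF
# chmod +x /etc/ppp/ip-up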



Running Oracle Real Application Clusters (RAC) on Sun Logical Domains (LDoms)

The Running Oracle Real Application Clusters (RAC) on Sun Logical Domains BluePrint is now available. In this blueprint you will find a step-by-step configuration example of two nodes in two guest domains located on two separate boxes. Use it as a starting point to build your own cluster.

This article discusses running Oracle Real Application Clusters (RAC) on servers configured with Sun™ Logical Domains (LDoms). Sun LDoms virtualization technology allows the creation of multiple virtual systems on a single physical system, and enables fine-grained assignment of CPU and memory resources to an Oracle RAC workload. When deployed on Sun's CoolThreads™ technology-based servers, with up to 256 threads per system, this solution provides a powerful platform for both development and production environments. In development environments, multiple Oracle RAC nodes can be deployed on the same physical server to reduce hardware costs, while a production environment can place each Oracle node on a separate physical server for increased availability.

The authors would like to thank Sridhar Kn, Ezhilan Narasimhan, Gia-Khanh Nguyen, and Uday Shetty from Sun Microsystems, as well as Rahim Mau and Khader Mohiuddin from Oracle Corporation, for their contributions during the certification of Oracle on Sun Logical Domains.



Configuring Oracle ASM in Solaris Container with Solaris Volume Manager.

Make sure that you have read the following posts before continuing with this one:
- Best Practices for Running Oracle Databases in Solaris Containers
- Configuration example of Oracle ASM on Solaris
- Configuration example of Oracle ASM in Solaris Container

We have already learned how to configure Oracle ASM in a Solaris Container. For that we set up our container with raw device access. While this works well, it is not a good idea to expose device names inside a container: the administrator of the virtualized OS should not know about devices in the global container, and being tied to a specific device gives no flexibility. In this exercise we will migrate from the raw device onto an SVM metadevice.

First we create the metadevice in the global container. This metadevice will consist of a single mirror. Create a new device with enough space to hold your database, for example like this:

# metastat d40
d40: Mirror
    Submirror 0: d41
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 2147450880 blocks (1023 GB)

d41: Submirror of d40
    State: Okay
    Size: 2147450880 blocks (1023 GB)
    Stripe 0:
        Device                                            Start Block  Dbase  State  Reloc  Hot Spare
        /dev/dsk/c4t600A0B80003391700000061F49A65117d0s0            0     No   Okay    Yes

Device Relocation Information:
Device                                          Reloc  Device ID
/dev/dsk/c4t600A0B80003391700000061F49A65117d0  Yes    id1,ssd@n600a0b80003391700000061f49a65117

Read more about Solaris Volume Manager.

Then we need to allow our container to access this device:

# zonecfg -z zone1
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/md/rdsk/d40
zonecfg:zone1:device> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

The container must be rebooted (# zoneadm -z zone1 reboot) to get access to this new device. Make sure you have stopped all Oracle processes in your container before rebooting. As soon as your container boots, you can log in to it and use the new device for ASM (assuming the ASM instance is running). First, assign oracle:dba ownership the same way as we did before:

# cd /dev/md/rdsk
# ls -lh
total 0
crw-r-----   1 root     sys       85, 40 Feb 27 04:40 d40
# chown oracle:dba d40
# ls -lh
total 0
crw-r-----   1 oracle   dba       85, 40 Feb 27 04:40 d40

Then connect to the Oracle ASM instance and try to attach this device to an ASM diskgroup:

$ ORACLE_SID=+ASM sqlplus / as sysdba
SQL*Plus: Release 11.1.0.7.0 - Production on Sun Mar 1 07:27:02 2009
Copyright (c) 1982, 2008, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create diskgroup testdg external redundancy disk '/dev/md/rdsk/d40';
create diskgroup testdg external redundancy disk '/dev/md/rdsk/d40'
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/md/rdsk/d40' matches no disks
ORA-15014: path '/dev/md/rdsk/d40' is not in the discovery set

By default /dev/md/rdsk is not in the search path for candidate devices, so we need to alter the initialization parameter:

SQL> show parameter asm_diskstring;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string

SQL> alter system set asm_diskstring = '/dev/rdsk','/dev/md/rdsk';

System altered.

SQL> show parameter asm_diskstring;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/rdsk, /dev/md/rdsk

Before attaching the device to our current diskgroup, we first try to create a test group, testdg.
If that works, we will re-attach the disk to the current disk group.

SQL> create diskgroup testdg external redundancy disk '/dev/md/rdsk/d40';

Diskgroup created.

SQL> SELECT STATE, NAME FROM V$ASM_DISKGROUP;

STATE       NAME
----------- ------------------------------
MOUNTED     DG1
MOUNTED     TESTDG

SQL> DROP DISKGROUP TESTDG;

Diskgroup dropped.

SQL> ALTER DISKGROUP DG1 ADD DISK '/dev/md/rdsk/d40';

Diskgroup altered.

SQL> SELECT STATE, NAME FROM V$ASM_DISKGROUP;

STATE       NAME
----------- ------------------------------
MOUNTED     DG1

As soon as you add the new disk you may see that Oracle starts rebalancing the disks:

                    extended device statistics
    r/s    w/s    kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.2   41.8     0.8 42190.0  0.0  0.5    0.0   11.8   0  49 md/d40
    0.2   41.8     0.8 42190.0  0.0  0.5    0.0   11.8   0  49 md/d41
   41.6   43.2 42189.2   172.8  0.0  0.6    0.0    6.5   0  49 c4t600A0B8000562790000005D04998C446d0
    0.2  330.2     0.8 42190.0  0.0  2.9    0.0    8.9   0  49 c4t600A0B80003391700000061F49A65117d0

Let's verify it:

SQL> set linesize 132;
SQL> column path format a18;
SQL> column name format a10;
SQL> column G# format 99;
SQL> column D# format 99;
SQL> select group_number G#, disk_number D#, state, redundancy, name, path, total_mb, free_mb, (total_mb - free_mb) used_mb from v$asm_disk;

 G#  D# STATE    REDUNDA NAME       PATH                 TOTAL_MB    FREE_MB    USED_MB
--- --- -------- ------- ---------- ------------------ ---------- ---------- ----------
  1   0 NORMAL   UNKNOWN DG1_0000   /dev/rdsk/c4t600A0    1048567     838515     210052
                                    B8000562790000005D
                                    04998C446d0s0
  1   1 NORMAL   UNKNOWN DG1_0001   /dev/md/rdsk/d40      1048560    1039786       8774

40 minutes later:

SQL> select group_number G#, disk_number D#, state, redundancy, name, path, total_mb, free_mb, (total_mb - free_mb) used_mb from v$asm_disk;

 G#  D# STATE    REDUNDA NAME       PATH                 TOTAL_MB    FREE_MB    USED_MB
--- --- -------- ------- ---------- ------------------ ---------- ---------- ----------
  1   0 NORMAL   UNKNOWN DG1_0000   /dev/rdsk/c4t600A0    1048567     938606     109961
                                    B8000562790000005D
                                    04998C446d0s0
  1   1 NORMAL   UNKNOWN DG1_0001   /dev/md/rdsk/d40      1048560     939695     108865

And finally we can drop the first disk:

SQL> alter diskgroup dg1 drop disk dg1_0000;

Diskgroup altered.

SQL> select group_number G#, disk_number D#, state, redundancy, name, path, total_mb, free_mb, (total_mb - free_mb) used_mb from v$asm_disk;

 G#  D# STATE    REDUNDA NAME       PATH                 TOTAL_MB    FREE_MB    USED_MB
--- --- -------- ------- ---------- ------------------ ---------- ---------- ----------
  1   0 DROPPING UNKNOWN DG1_0000   /dev/rdsk/c4t600A0    1048567     939145     109422
                                    B8000562790000005D
                                    04998C446d0s0
  1   1 NORMAL   UNKNOWN DG1_0001   /dev/md/rdsk/d40      1048560     939156     109404

Wait for Oracle to complete evacuating the data:

                    extended device statistics
    r/s    w/s    kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.2   81.8     0.8 40515.8  0.0  0.7    0.0    8.8   0  60 md/d40
    0.2   81.8     0.8 40515.8  0.0  0.7    0.0    8.8   0  60 md/d41
   39.2    1.0 40141.4     4.0  0.0  0.5    0.0   13.1   0  52 c4t600A0B8000562790000005D04998C446d0
    0.2  357.6     0.8 40515.8  0.0  2.9    0.0    8.0   0  60 c4t600A0B80003391700000061F49A65117d0

SQL> select group_number G#, disk_number D#, state, redundancy, name, path, total_mb, free_mb, (total_mb - free_mb) used_mb from v$asm_disk;

 G#  D# STATE    REDUNDA NAME       PATH                 TOTAL_MB    FREE_MB    USED_MB
--- --- -------- ------- ---------- ------------------ ---------- ---------- ----------
  0   0 NORMAL   UNKNOWN            /dev/rdsk/c4t600A0          0          0          0
                                    B8000562790000005D
                                    04998C446d0s0
  1   1 NORMAL   UNKNOWN DG1_0001   /dev/md/rdsk/d40      1048560     829745     218815

From the global container we will detach the initial raw device, leaving only the SVM-managed one attached to the container.
# zonecfg -z zone1
zonecfg:zone1> remove device match=/dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

Again, reboot the container, making sure the Oracle processes were shut down properly beforehand. From now on you have a metadevice attached to the container. From the global container you can attach a second mirror to this metadevice, and you can also move the metadevice between arrays. An added advantage is that your container does not care about the devices constituting this metadevice.
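For reference, a minimal sketch of how the one-way mirror d40 could have been built in the global container, and how a second submirror is attached later. The second slice name is hypothetical, and state database replicas (metadb) are assumed to exist already:

# metainit d41 1 1 c4t600A0B80003391700000061F49A65117d0s0   # submirror: one stripe of one slice
# metainit d40 -m d41                                        # one-way mirror on top of it
# metainit d42 1 1 c5tXXXXd0s0                               # hypothetical slice for the second half
# metattach d40 d42                                          # attach second submirror; resync starts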



Configuration example of Oracle ASM in Solaris Container.

In this post I will give you a tip on how to set up Oracle ASM in a Solaris Container. The main point of the container's configuration is to set the proper privileges. Your container must have the proc_priocntl privilege and should also have proc_lock_memory (highly recommended) in order to function properly with ASM in it. Use the following as a configuration example when creating the container and adjust it to your needs. Please read the comments inline:

create
# container will be named zone1
# make sure that directory /zones exists and has permissions 700
set zonepath=/zones/zone1
set autoboot=true
set limitpriv=default,proc_priocntl,proc_lock_memory
set scheduling-class="FSS"
set max-shm-memory=16G
# use ip-type exclusive at your wish; non-exclusive is also possible
set ip-type=exclusive
add net
set physical=e1000g1
end
add fs
set dir=/usr/local
# make sure /opt/zone1/local exists
set special=/opt/zone1/local
set type=lofs
end
add fs
# mount /distro from the global zone into the container.
# I have the Oracle distribution files there
set dir=/distro
set special=/distro
set type=lofs
add options [ro]
end
# this device will be used for ASM inside the container
add device
set match=/dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0
end
# permit the container to use 16 CPUs
add dedicated-cpu
set ncpus=16-16
end
verify
commit

Put this config into zone1.txt and edit the file to adjust it to your configuration. Then create, install and boot the container:

# zonecfg -z zone1 -f zone1.txt
# zoneadm -z zone1 install
# zoneadm -z zone1 boot

When you are done, log in to the newly created container and proceed with installing Oracle and configuring ASM.

Tips: Since I have a dual-FC connected 6140 array, I have it configured with the Solaris I/O multipathing feature enabled:

# stmsboot -D fp -e

I really like to use VNC to access my lab remotely.
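After the first boot it is worth confirming that the extra privileges really made it into the container; a quick sketch:

# zlogin zone1
# ppriv $$ | grep -e priocntl -e lock_memory
(the privilege sets printed for the shell should now include proc_priocntl and proc_lock_memory)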



Oracle Database 10g and Real Application Clusters certified with Sun Logical Domains (LDoms)

Oracle Database 10g and Real Application Clusters (RAC) are now officially certified and supported on Sun Logical Domains (LDoms). The details are also available in the technology support matrix. Using LDoms virtualization technology, an administrator can manage system resources with maximum flexibility and obtain the highest level of security, while at the same time increasing the utilization of the existing servers. Among the many possible configurations, I would especially like to highlight the ability to build a cluster of several nodes located on one single physical server. Start building your cluster-on-a-chip right now. Stay tuned: a configuration example will appear here soon.



Configuration example of Oracle ASM on Solaris.

Update: The instructions in this post are not valid for Oracle 11gR2. In 11gR2, ASM is not part of the database, as it moved into Grid Infrastructure.

In this example you will see how to configure Oracle ASM on Solaris. The following system will be used: a Sun T5220 with an attached 6140 array, Solaris 10 (10/08) and Oracle 11. Briefly, an Oracle Database installation can be performed as five separate steps:

1. Install the Oracle binaries only
2. Install the Oracle patchset
3. Configure ASM
4. Configure the Listener
5. Create the database

In this post I will show the steps necessary to create ASM (step 3). Each step is accompanied by its screenshot.

Start dbca and choose "Configure Automatic Storage Management":

$ dbca

Before using ASM you will be asked to configure the Oracle Cluster Synchronization Service (CSS). Log in as root and execute the required script:

bash-3.00# /oracle/database/product/10.2.0/bin/localconfig add
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized
Cleaning up Network socket directories
Setting up Network socket directories
Adding to inittab
Feb 16 02:04:24 rac10 root: Oracle Cluster Synchronization Service starting by user request.
Startup will be queued to init within 30 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
Feb 16 02:04:24 rac10 root: Cluster Ready Services completed waiting on dependencies.
Cluster Synchronization Services is active on these nodes.
rac10
Cluster Synchronization Services is active on all the nodes.
Oracle CSS service is installed and running under init(1M)

After the script completes, continue to the next step and enter the desired password for the ASM instance. I chose 'oracle'. Proceed with the ASM instance creation. Now we can check that the ASM instance is running:

bash-3.00# ps -ef | grep ASM
  oracle  8438     1   0 02:10:09 ?        0:00 asm_vktm_+ASM
  oracle  8719  8378   0 02:12:08 ?        0:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
  oracle  8435     1   0 02:10:09 ?        0:00 asm_pmon_+ASM
  oracle  8446     1   0 02:10:09 ?        0:00 asm_dia0_+ASM
  oracle  8442     1   0 02:10:09 ?        0:00 asm_diag_+ASM
  oracle  8444     1   0 02:10:09 ?        0:00 asm_psp0_+ASM
  oracle  8448     1   0 02:10:09 ?        0:00 asm_mman_+ASM
  oracle  8450     1   0 02:10:09 ?        0:00 asm_dbw0_+ASM
  oracle  8452     1   0 02:10:09 ?        0:00 asm_lgwr_+ASM
  oracle  8454     1   0 02:10:09 ?        0:00 asm_ckpt_+ASM
  oracle  8456     1   0 02:10:09 ?        0:00 asm_smon_+ASM
  oracle  8458     1   0 02:10:09 ?        0:00 asm_rbal_+ASM
  oracle  8460     1   0 02:10:09 ?        0:00 asm_gmon_+ASM
    root  8833   734   0 02:13:03 console  0:00 grep ASM

When the instance is up we need to create the disk group(s). If you choose "Create New" you may see that there are no candidate disks for the new disk group. We need to set oracle:dba ownership on the candidate disk. As root we will take partition s0 of disk c4t600A0B8000562790000005D04998C446d0:

bash-3.00# format
Searching for disks...done

c4t600A0B8000562790000005D04998C446d0: configured with capacity of 1024.00GB

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@1,0
       2. c4t600A0B80003391700000060A49963C06d0 <SUN-CSM200_R-0710-1.00TB>
          /scsi_vhci/ssd@g600a0b80003391700000060a49963c06
       3. c4t600A0B8000562790000005D04998C446d0 <SUN-CSM200_R-0710-1.00TB>
          /scsi_vhci/ssd@g600a0b8000562790000005d04998c446
Specify disk (enter its number): 3
selecting c4t600A0B8000562790000005D04998C446d0
[disk formatted]
Disk not labeled. Label it now? y

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> p
Current partition table (original):
Total disk sectors available: 2147467230 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector       Size       Last Sector
  0        usr    wm               34   1023.99GB        2147467230
  1 unassigned    wm                0          0                  0
  2 unassigned    wm                0          0                  0
  3 unassigned    wm                0          0                  0
  4 unassigned    wm                0          0                  0
  5 unassigned    wm                0          0                  0
  6 unassigned    wm                0          0                  0
  8   reserved    wm       2147467231      8.00MB        2147483614

partition> Ctrl-D

The default owner is root:sys; it needs to be changed to oracle:dba:

bash-3.00# ls -lhL /dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0
crw-r-----   1 root     sys      118, 64 Feb 16 02:10 /dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0
bash-3.00# chown oracle:dba /dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0
bash-3.00# ls -lhL /dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0
crw-r-----   1 oracle   dba      118, 64 Feb 16 03:00 /dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0

Going back to candidate disks, we now have our disk in the list. Enter the desired name for the disk group and mark the candidate disk. I will use External redundancy since I have an external array. Continue and finish the ASM creation.

To check disk group status and size quickly, use the following asm.sql query:

-bash-3.00$ cat asm.sql
set linesize 132;
column path format a58;
column name format a10;
column G# format 99;
column D# format 99;
select group_number G#, disk_number D#, state, redundancy, name, path, total_mb, free_mb, (total_mb - free_mb) used_mb from v$asm_disk;
select name, total_mb, free_MB from v$asm_diskgroup;
exit

-bash-3.00$ ORACLE_SID=+ASM sqlplus / as sysdba @asm

SQL*Plus: Release 11.1.0.7.0 - Production on Mon Feb 16 03:11:21 2009
Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

 G#  D# STATE    REDUNDA NAME       PATH                                                 TOTAL_MB    FREE_MB    USED_MB
--- --- -------- ------- ---------- --------------------------------------------------- ---------- ---------- ----------
  1   0 NORMAL   UNKNOWN DG1_0000   /dev/rdsk/c4t600A0B8000562790000005D04998C446d0s0      1048567    1048508         59

NAME         TOTAL_MB    FREE_MB
---------- ---------- ----------
DG1           1048567    1048508

Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Now you can move on to database creation using ASM storage. Set up the listener by running $ netca, and create the database by running $ dbca.



Best Practices for Running Oracle Databases in Solaris Containers

“Best Practices for Running Oracle Databases in Solaris Containers” is now available on the Sun BluePrints site. This document describes Solaris Containers features for use with Oracle databases. You will learn how to set up a Container and assign resources to it (scheduler, CPU and memory capping). It tells you which privilege gives you the ability to use Dynamic Intimate Shared Memory (DISM) with Oracle. You will find out how to set up a Container so that it has its own IP stack. Mounting UFS and ZFS filesystems, devices in Containers, and System V Resource Controls are also covered.

Summary: Solaris Containers provide a very flexible and secure method of managing multiple applications on a single Solaris OS instance. Solaris Containers use Solaris Zones software partitioning technology to virtualize the operating system and provide isolated and secure runtime environments for applications. Solaris Resource Manager can be used to control resource usage, such as capping memory and CPU usage, helping to ensure workloads get required system resources. By utilizing Solaris Containers, multiple applications, or even multiple instances of the same application, can securely coexist on a single system, providing potential server consolidation savings. Oracle 9i R2 and 10g R2 databases have been certified to run in a Solaris container. This paper provides step-by-step directions for creating a non-global zone in a Solaris container that is appropriate for running a non-RAC Oracle database. In addition, it describes special considerations that apply when running an Oracle database within a Solaris container.



Monitoring ZFS statistics on a system loaded with an Oracle database

Nothing happens without a reason, and my previous post about Monitoring ZFS Statistics is no exception. This week I had a T5240 (128 threads, 64GB memory, Solaris 10 10/08 aka U6) with a 6140 attached and not loaded with my other projects. So I decided to load the box with Oracle on ZFS (binaries, datafiles, logs) and run some tests. Not benchmarking; just checking whether this can live and breathe. I also wanted to watch the battle for memory between the Oracle SGA (set to 32GB), 3800 processes connecting to the database (each taking approx. 5MB of non-shared memory) and ZFS. On the storage I made a single 1TB RAID1 LUN and put ZFS on it. No separation of ZFS data/logs. Keeping it simple.

root@rac05# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
oracle  1016G   243G   773G    23%  ONLINE  -

The arcstat script was modified a little further (lines 82, 177, 229 and 230) so I don't get too many zeros in the statistics, by going up from bytes to MB. Download the new arcstat script and the description file for dimstat.

 81 my $version = "0.1";
 82 my $mega = 1024 * 1024;
 83 my $cmd = "Usage: arcstat.pl [-hvx] [-f fields] [-o file] [interval [count]]\n";
...
171 sub prettynum {
172     my @suffix=(' ','K', 'M', 'G', 'T', 'P', 'E', 'Z');
173     my $num = $_[1] || 0;
174     my $sz = $_[0];
175     my $index = 0;
176     return sprintf("%s", $num) if not $num =~ /^[0-9\.]+$/;
177     return sprintf("%*d", $sz, $num);
178     while ($num > 1000 and $index < 8) {
179         $num = $num/1000;
180         $index++;
181     }
182     return sprintf("%*d", $sz, $num) if ($index == 0);
183     return sprintf("%*d%s", $sz - 1, $num, $suffix[$index]);
184 }
...
200 sub calculate {
...
229     $v{"arcsz"} = $cur{"size"}/$mega;
230     $v{"c"} = $cur{"c"}/$mega;
...
238 }

On the picture you can see the monitoring results. At 1:01 the test was started. It took 1 hour until all processes became ready (one process was started each second). Then the Oracle database was busy between 2:04 and 2:53. As you can see, ZFS takes more memory if it can, but it stays "polite" by freeing this memory for other processes when they need it. I will be glad to hear from you if you have any thoughts or experience running Oracle Database on ZFS. Thanks to Valery for his Perl syntax highlighting plugin for NetBeans.
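For reference, the modified script is still invoked like the original arcstat, e.g. one line every 5 seconds, 10 times (the usage string is on line 83 above):

$ /etc/STATsrv/bin/arcstat.pl 5 10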



Monitoring ZFS Statistics

By combining two great tools, arcstat and dimstat, you can get ZFS statistics in: table view; chart view; any date/time interval; host-to-host comparison. For example, online table and chart view.

The following easy steps are needed to integrate arcstat into dimstat. I assume you are already a happy user of dimstat and ZFS.

1. Download the arcstat.pl script from this page.
2. Modify the arcstat.pl script so it prints numerical values instead of pretty numbers (20000000000 versus 20G). These numbers will be inserted into a database, which is why we need exact values. In the script, locate the prettynum function and add one line (marked below):

sub prettynum {
    my @suffix=(' ','K', 'M', 'G', 'T', 'P', 'E', 'Z');
    my $num = $_[1] || 0;
    my $sz = $_[0];
    my $index = 0;
    return sprintf("%s", $num) if not $num =~ /^[0-9\.]+$/;
    return sprintf("%d", $num);    # <-- the added line: always return the raw number
    while ($num > 1000 and $index < 8) {
        $num = $num/1000;
        $index++;
    }
    return sprintf("%*d", $sz, $num) if ($index == 0);
    return sprintf("%*d%s", $sz - 1, $num, $suffix[$index]);
}

3. Save the modified script in /etc/STATsrv/bin/ on the monitored server.
4. Give execute permissions to the script:
   chmod +x /etc/STATsrv/bin/arcstat.pl
5. Register the script in the dimstat "access to execute" database by adding this line at the end of the /etc/STATsrv/access file:
   command arcstat /etc/STATsrv/bin/arcstat.pl
6. Download the description file arcstat.desc.
7. Log in to the dimstat server and choose "Add-On STAT(s)".
8. Choose "Restore Add-On STAT(s) Description".
9. Choose "From Local File on your disk" and upload the description file arcstat.desc.

From now on you can go to the dimstat Home page and "Start New Collect"; arcstat will appear in the list of available STAT(s). Wait a few minutes before results appear in the Analyze section. This modification lets you monitor the 11 basic values provided by arcstat by default; at most, arcstat gives all 30 values from kstat -m zfs.
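The raw counters that arcstat reads can also be inspected directly with kstat; a quick sketch:

$ kstat -m zfs -n arcstats                          # every ARC counter, human-readable
$ kstat -p zfs:0:arcstats:size zfs:0:arcstats:c     # e.g. current and target ARC size, parseable form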



dimSTAT in examples

Oracle runs best on Sun series. Last week I had the chance to attend the Oracle RAC DD4D (deep dive for developers) seminar organized by Oracle in Moscow. The seminar was extremely interesting and useful; I recommend it! Many thanks to Dmitry and all the speakers! Among the topics discussed was performance, specifically the performance of an Oracle RAC cluster. It goes without saying that system administrators, database administrators and users all want the fastest possible system. But before you start tuning a system, you need to know what is actually happening with it. From the users you may hear that it does not work well enough. Database administrators have at least the standard monitoring tools from Oracle (Enterprise Manager). But what does the system administrator have? The system's many *stat commands, producing columns of numbers? What if you have no experience with the *stat commands? Not everyone remembers the flags of these commands. And you need information covering a long period of time. In that case dimSTAT will help. In this post I will show a few pictures from dimSTAT on a four-node Oracle RAC cluster on Solaris 10.

The main features of dim_STAT: a web interface; all collected statistics stored in one place; the ability to view several collected statistics at once; interactive (Java) or static (PNG) presentation; real-time monitoring; monitoring several servers simultaneously; analysis over any time interval; the ability to add your own extensions (Add-Ons); automation of actions; bookmarks to selected statistics.

Here you will find a few examples of what you can see with dimSTAT. The charts shown were taken from different tests and do not correlate with each other. All the charts cover several nodes, but you can also look at individual hosts with the level of detail you need. CPU usage in user context. CPU usage in system and system+user contexts. Disk read+write and network In+Out. Breakdown by the number of network packets In and Out:



dimSTAT by examples

Oracle runs best on Sun series. Last week I attended the Oracle RAC DD4D (deep dive for developers) seminar here in Moscow, Russia. No need to say how useful and full of information this seminar was. Thanks to Dmitry and all the presenters! Among many things we covered performance, specifically Oracle RAC performance. System administrators, DBAs and users all want a better and faster system. But before you try to make your system better, you have to look at it from all possible angles. Users can tell you about their personal feelings: "Oh, my program A works so slowly". In a DBA's hands there are powerful tools from Oracle and other vendors. But what is in a sysadmin's hands? Is it still the *stat commands producing rows of data? What if you don't have experience reading *stat command output and can hardly remember their syntax, or even their names? In that case all you need is dimSTAT. In this post I will show some screenshots of dimSTAT monitoring a four-node Solaris 10 cluster running Oracle RAC.

The main features of dim_STAT are: a web based user interface; all collected data saved in a central database; multiple data views; interactive (Java) or static graphs (PNG); real-time monitoring; multi-host monitoring; post analysis; statistics integration (Add-Ons); professional reporting with automated features; one-click STAT-Bookmarks.

Here are a few examples of what you can see with dimSTAT. All screenshots shown here were taken at different times, so most of them do not correlate. Although the examples shown are multi-host, you can do per-host analysis with any detail you need. CPU usage in User context. CPU usage in System and User+System. Storage Read+Write and Network In+Out. The next two charts show network packets In and Out:



Quick and easy way to setup Guest Logical Domains using ZFS clone.

Question: I want to install more than one guest domain. Is there any way to speed up the process?

Answer: Yes, sure. These instructions can be used as a supplement to «Run your first Logical Domain in 10 minutes» as well as a separate guide.

1. First set up the Control Domain and install the first guest domain, ldg1. Use a ZFS volume (in this example, data/demo/ldg1) as the back-end for the guest domain's system drive. You can install Solaris either through a network install or by mounting a DVD or ISO file.
2. Log in to the first guest domain's console and run 'sys-unconfig' in the domain:

# sys-unconfig
WARNING
This program will unconfigure your system. It will cause it
to revert to a "blank" system - it will not have a name or know
about other systems or networks.
This program will also halt the system.
Do you want to continue (y/n) ? y

3. Wait for the system to come down and answer h (halt):

svc.startd: The system is down.
syncing file systems... done
Program terminated
r)eboot, o)k prompt, h)alt? h

4. From the Control Domain verify that guest domain ldg1 is stopped:

# ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv   SP      4     4G       0.5%  1h 30m
ldg1             bound      -----   5000    14    1984M

5. Unbind the domain:

# ldm unbind ldg1

6. Create a ZFS snapshot of the first guest domain's disk image:

# zfs snapshot data/demo/ldg1@install

7. Bind and start the domain again:

# ldm bind ldg1
# ldm start ldg1

8. Clone the snapshot using the name ldg2 for the target volume:

# zfs clone data/demo/ldg1@install data/demo/ldg2

9. Set up the second guest domain using the newly created ZFS clone data/demo/ldg2 as the back-end for its system disk (see the sketch below).

Repeat steps 8 and 9 for each additional domain.
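Step 9 in sketch form, assuming the control domain already has a virtual disk service named primary-vds0 and that ldg2 has been created with its CPU and memory like ldg1 (the service and volume names here are hypothetical; match them to your own configuration):

# ldm add-vdsdev /dev/zvol/dsk/data/demo/ldg2 ldg2_boot@primary-vds0   # export the clone as a volume
# ldm add-vdisk vdisk0 ldg2_boot@primary-vds0 ldg2                     # give ldg2 the volume as its system disk
# ldm bind ldg2
# ldm start ldg2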


About the ISV Engineering group at Sun Microsystems where I work.

A few words about the ISV Engineering group at Sun Microsystems where I work (a loose translation of a post by Daniel Powers, ISV Engineering). Our team helps independent software vendors (ISVs), hardware vendors (IHVs), open-source software (OSS) developers, and startups get the maximum benefit out of Sun Microsystems technologies. We work on the following kinds of projects:
- Technology adoption (e.g. OpenSolaris)
- Technology training (e.g. Mobile Ajax at Sun Tech Days)
- Working with open-source developer communities (e.g. SugarCRM)
- Performance tuning and recommended configurations (e.g. Helios UB+ Sizing & Tuning)
- Articles and reviews (e.g. Accelerate Application Builds with DMake)
- Benchmarking (e.g. SAP world records)
- The developer support program (SPA) (that is, an ISV company can get free support on the site, or simply by sending its question to isvsupport@sun.com)
- Supporting ISVs and startups through the hosting program (e.g. Amazon EC2 for OpenSolaris)
- A huge number of blogs and articles on the technologies that interest you (e.g. Herrington ... Rajadurai ... Startup Team ... Kumar ... Zhu (中文) ... Drapeau ... van den Berg ... Mandalika ... Hasham ... India Open Source ... Glore ... Chen ... Jignesh ... Langston ... Shankar ... MShetty ... Mithun ... Reid ... van den Boogard ... Hosanee ... Praneet ... Prashant ... Lor ... Nair ... Sidharth ... Schneider et al. ... Daly ... Vanga ... ChrisZhu ...)


Run your first Logical Domain in 10 minutes.

Here you will find a script which will help you run your first Logical Domain in 10 minutes. I assume you have a fresh system, with no LDoms running and no LDoms software installed. Back up your data before starting. Checklist:
- Do you have a spare server with Niagara chip(s) inside? Check your server room.
- Does it run the latest firmware? Check the Sun System Handbook at SunSolve.
- Does it run Solaris 10 U5 or later? Check with cat /etc/release.
- Do you have a ZFS pool with at least 10GB of free space? Check by running zpool list.
If you answered YES to all the questions above, then follow these steps:
1. Download Logical Domains Manager 1.0.3 from http://www.sun.com/servers/coolthreads/ldoms/get.jsp and unzip it in the current directory.
2. Download the setup script, ldmc.sh, into the current directory.
3. Run the script as root:
# sh ./ldmc.sh
(Novice mode: the CPU count and amount of memory are assigned by the script), or
# sh ./ldmc.sh -e
(Expert mode: you will be prompted to choose the CPU count and amount of memory for each domain)
1st run: creating the Control Domain
# sh ./ldmc.sh
Copyright 2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Installation of <SUNWldm> was successful.
You may notice the following messages on the console. Ignore them.
Sep  7 14:16:34 dhcp164 vntsd[28471]: Error opening VCC device control port: /devices/virtual-devices@100/channel-devices@200/virtual-console-concentrator@0:ctl
Sep  7 14:16:35 dhcp164 vntsd[28477]: Error opening VCC device control port: /devices/virtual-devices@100/channel-devices@200/virtual-console-concentrator@0:ctl
Sep  7 14:16:35 dhcp164 vntsd[28482]: Error opening VCC device control port: /devices/virtual-devices@100/channel-devices@200/virtual-console-concentrator@0:ctl
Sep  7 14:16:35 dhcp164 svc.startd[7]: ldoms/vntsd:default failed: transitioned to maintenance (see 'svcs -xv' for details)
LDoms Manager already installed.
Please create your control domain.
When creating the control domain you will be asked for the primary network interface of the system. This is usually the interface you are connected through, the one the default route goes via. The script will change this interface to the virtual switch vsw0.
Please confirm your primary interface [e1000g0]. Enter name if different.
e1000g0     (if the guess above is right, just press Enter; the question is which interface carries the default route)
factory-default [current]
Found 32 HW threads in 8 cores. 4 strands per core.
--- Installing services. Ignore notices
Notice: the LDom Manager is running in configuration mode. Any configuration changes made will only take effect after the machine configuration is downloaded to the system controller and the host is reset.
(the notice above is printed several times; ignore it)
--- Configuring control domain. Ignore notices
(the same configuration-mode notice is printed several more times)
--- Saving config
--- Changing primary interface e1000g0 to vsw0 to be able to have network between domains
You will then be prompted for a ZFS pool name. On this pool the script creates a ZFS volume for each guest logical domain; these volumes are the back-end storage for each domain's system disk. Confirm the system reboot. After the reboot your system will have turned into a control domain, with spare CPUs and memory left for additional guest domains.
Please choose zpool where you want to place domain's virtual disk image file.
NAME   AVAIL
distr  30.9G
distr       (enter the zpool name where the virtual disk will be stored)
Will create distr/demo file system for guest domains.
Reboot is needed now
Choose [y/n] to reboot
y
Rebooting now.....
You have new mail in /var/mail//root
2nd run: creating a Guest Domain
On the second run of the script you will create a guest domain. Before creating it, decide which media you will install Solaris from into the newly created domain: a network installation, a DVD, or an ISO image. You will be asked what share of the free CPUs and memory to give to the new domain, the back-end disk size, and the installation method.
# sh ./ldmc.sh
creating ldg1
You already have the following domains configured
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv   SP      4     4G       37%   6m
The following resources are left unused: CPU: 28, Memory: 3968 MB
Considering these resources, how many (N) additional domains do you want to create? On the next step I will create a domain taking 1/N of the resources.
Please choose any number from 1 to 28
4
Must use boot-net or boot-cdrom
NAME   AVAIL
distr  25.3G
Please enter how many GB should be dedicated to domain ldg1
7
Please choose Solaris 10 media type: 1 for Network Install, 2 for DVD or ISO image. (1|2)
1
Domain ldg1 created
Run 'telnet localhost 5000' in another shell to get console, see booting messages and follow standard Solaris configuration wizard
Press Enter when ready to start domain
In another shell run telnet as shown above, then press Enter to continue the script.
LDom ldg1 started
In the telnet localhost 5000 session you will see the guest domain booting, and then the Solaris Installation Wizard will appear.
Tip: If you want to create two equal guest domains, you have to run the script twice:
the first time choose 2 (taking 1/2 of the resources), the second time choose 1 (taking all the resources that are left unused). Tip: If you want four guest domains, start with 4, then 3, 2 and 1. Tip: If you want to choose the CPU count and memory yourself, run the script in Expert mode: sh ./ldmc.sh -e. Looking for more? "I've not used LDoms before, so I'm not too sure how much time will be needed to configure the systems how we want." "I want to install more than one guest domain. Is there any way to speed up the process?" "I need help installing from DVD or ISO." Stay tuned... Manuals worth reading: Beginners Guide to LDoms, http://www.sun.com/blueprints/0207/820-0832.pdf
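For the curious, here is roughly what a script like ldmc.sh does for you behind the scenes. This is a minimal hand-typed sketch, assuming the interface (e1000g0), pool (distr) and the 28-CPU/3968-MB split into four domains from the transcript above; names like primary-vds0, vol1 and vdisk0 are illustrative, and your values will differ:

# ldm add-vds primary-vds0 primary                        (virtual disk service)
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary   (virtual console ports)
# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary        (virtual switch)
# ldm set-vcpu 4 primary
# ldm set-memory 4G primary
# ldm add-config initial                                  (save to the system controller)
# svcadm enable vntsd                                     (console service)
Reboot, then create the guest domain on a ZFS volume:
# zfs create -V 7g distr/demo/ldg1
# ldm add-domain ldg1
# ldm add-vcpu 7 ldg1
# ldm add-memory 992M ldg1
# ldm add-vnet vnet0 primary-vsw0 ldg1
# ldm add-vdsdev /dev/zvol/dsk/distr/demo/ldg1 vol1@primary-vds0
# ldm add-vdisk vdisk0 vol1@primary-vds0 ldg1
# ldm bind ldg1
# ldm start ldg1

The script's value is mostly in doing this arithmetic and bookkeeping for you, and in asking the right questions at the right time.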


You can lose GRUB after upgrading your zpool version

For those who like upgrading to the latest versions, it is useful to know that if you run only zpool upgrade, then on the next reboot, or more precisely at power-on, instead of a working GRUB you will get nothing but its prompt. For example, I walked the OpenSolaris path: builds 86, 96, 97 and now 98. Between builds 86 and 98 the ZFS pool version first went from 11 to 12, and then from 12 to 13. I would not have paid attention, but I periodically attach an external USB disk (Lacie 250GB) to my laptop for backup purposes and check what zpool status says. It is what talked me into running zpool upgrade both times. And both times I stepped on this rake. The problem is that when you run zpool upgrade, you must also update the GRUB boot loader. The bug has already been accepted for work. If you did not update the boot loader, you can try telling GRUB by hand where to boot from, but it will not help:
GRUB> bootfs rpool/ROOT/opensolaris-2
GRUB> kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
....
Error 17: cannot mount selected partition
So download the DVD with the new release, boot from it, and run installgrub:
jack@opensolaris:~/Desktop$ su -
Password: opensolaris
Sun Microsystems Inc.   SunOS 5.11      snv_98  November 2008
bash-3.2# zpool status
no pools available
bash-3.2# zpool import
  pool: rpool
    id: 1988807281359054800
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:
        rpool         ONLINE
          mirror      ONLINE
            c5t0d0s0  ONLINE
            c3t0d0s0  ONLINE
bash-3.2# zpool import -f rpool
bash-3.2# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
Updating master boot sector destroys existing boot managers (if any).
continue (y/n)? y
stage1 written to partition 1 sector 0 (abs 96406065)
stage2 written to partition 1, 267 sectors starting at 50 (abs 96406115)
stage1 written to master boot sector
bash-3.2# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t0d0s0
Updating master boot sector destroys existing boot managers (if any).
continue (y/n)? y
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 267 sectors starting at 50 (abs 16115)
stage1 written to master boot sector
bash-3.2# zpool export rpool
bash-3.2# init 6
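A quick way to avoid this rake entirely is to check, before typing zpool upgrade, which on-disk version the pool is at and which versions the running build supports, and to plan the installgrub step right away. A small sketch using standard ZFS commands (rpool is the root pool from this post):

$ zpool get version rpool    (the pool's current on-disk version)
$ zpool upgrade -v           (the pool versions this build supports)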


zpool upgrade broke GRUB

I should say that I have OpenSolaris on my Sony Vaio laptop, and I follow the newest OpenSolaris releases: 86, 96, 97 and now 98. Between builds 86 and 98 the ZFS pool version was increased twice: from 11 to 12 and from 12 to 13. I pay attention to zpool status because I have an external USB drive attached (Lacie 250GB) for mirror (backup) purposes. So I ran zpool upgrade two times, and both times I had a problem with the next boot. The problem is that GRUB is not able to read the new ZFS version, so all you get is the GRUB prompt. You may try to enter the GRUB entries by hand, but this will not help you:
GRUB> bootfs rpool/ROOT/opensolaris-2
GRUB> kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
....
Error 17: cannot mount selected partition
The solution is to get the DVD of your build, boot from it, and then run installgrub:
jack@opensolaris:~/Desktop$ su -
Password: opensolaris
Sun Microsystems Inc.   SunOS 5.11      snv_98  November 2008
bash-3.2# zpool status
no pools available
bash-3.2# zpool import
  pool: rpool
    id: 1988807281359054800
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:
        rpool         ONLINE
          mirror      ONLINE
            c5t0d0s0  ONLINE
            c3t0d0s0  ONLINE
bash-3.2# zpool import -f rpool
bash-3.2# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
Updating master boot sector destroys existing boot managers (if any).
continue (y/n)? y
stage1 written to partition 1 sector 0 (abs 96406065)
stage2 written to partition 1, 267 sectors starting at 50 (abs 96406115)
stage1 written to master boot sector
bash-3.2# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t0d0s0
Updating master boot sector destroys existing boot managers (if any).
continue (y/n)? y
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 267 sectors starting at 50 (abs 16115)
stage1 written to master boot sector
bash-3.2# zpool export rpool
bash-3.2# init 6
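Since both halves of the root mirror need fresh boot blocks, the two installgrub invocations above are easy to fold into a loop. A minimal sketch with the device names from this post; answer y at each "continue (y/n)?" prompt:

bash-3.2# for disk in c5t0d0s0 c3t0d0s0; do
> installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/$disk
> done

Run this after every zpool upgrade of the root pool, before the next reboot, and the GRUB-prompt surprise never happens.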
