Fusion Middleware

Building an FMW Cluster using Docker (Part III Running Docker Containers)

Click here for a Google Docs version of this document that doesn't suffer from the Oracle blog formatting problems.
{content: "" counter(lst-ctn-kix_emhp84jkv42c-3, lower-latin) ") "}.lst-kix_emhp84jkv42c-5>li:before {content: "(" counter(lst-ctn-kix_emhp84jkv42c-5, lower-latin) ") "}.lst-kix_lemcawe54w5c-8>li:before {content: "\0025a0 "}ol.lst-kix_yly1729bcywk-4.start {counter-reset: lst-ctn-kix_yly1729bcywk-4 0}.lst-kix_emhp84jkv42c-1>li:before {content: "" counter(lst-ctn-kix_emhp84jkv42c-1, upper-latin) ". "}.lst-kix_emhp84jkv42c-7>li:before {content: "(" counter(lst-ctn-kix_emhp84jkv42c-7, lower-latin) ") "}.lst-kix_lemcawe54w5c-6>li:before {content: "\0025cf "}.lst-kix_jj5w63toozfm-5>li:before {content: "" counter(lst-ctn-kix_jj5w63toozfm-5, lower-roman) ". "}.lst-kix_bfzyeb917dp8-2>li {counter-increment: lst-ctn-kix_bfzyeb917dp8-2}ul.lst-kix_1qz6dmm9b14l-6 {list-style-type: none}.lst-kix_lemcawe54w5c-4>li:before {content: "\0025cb "}.lst-kix_gyhqddkw9i05-8>li:before {content: "\0025a0 "}.lst-kix_jj5w63toozfm-7>li:before {content: "" counter(lst-ctn-kix_jj5w63toozfm-7, lower-latin) ". "}ul.lst-kix_1qz6dmm9b14l-5 {list-style-type: none}ul.lst-kix_1qz6dmm9b14l-8 {list-style-type: none}ul.lst-kix_1qz6dmm9b14l-7 {list-style-type: none}.lst-kix_jj5w63toozfm-1>li {counter-increment: lst-ctn-kix_jj5w63toozfm-1}ul.lst-kix_1qz6dmm9b14l-2 {list-style-type: none}ul.lst-kix_1qz6dmm9b14l-1 {list-style-type: none}ul.lst-kix_1qz6dmm9b14l-4 {list-style-type: none}ul.lst-kix_1qz6dmm9b14l-3 {list-style-type: none}.lst-kix_u0uqs69v9qbh-0>li:before {content: "\0025cf "}.lst-kix_u0uqs69v9qbh-4>li:before {content: "\0025cb "}ul.lst-kix_1qz6dmm9b14l-0 {list-style-type: none}.lst-kix_s3mi7ukxwiwf-6>li {counter-increment: lst-ctn-kix_s3mi7ukxwiwf-6}.lst-kix_jj5w63toozfm-7>li {counter-increment: lst-ctn-kix_jj5w63toozfm-7}.lst-kix_lemcawe54w5c-2>li:before {content: "\0025a0 "}.lst-kix_u0uqs69v9qbh-2>li:before {content: "\0025a0 "}.lst-kix_8p26nc4xx5n8-6>li:before {content: "\0025cf "}ol.lst-kix_1wulu3ra2vwv-4.start {counter-reset: lst-ctn-kix_1wulu3ra2vwv-4 0}.lst-kix_l7z426mwssm0-8>li:before 
{content: "\0025a0 "}.lst-kix_u0uqs69v9qbh-8>li:before {content: "\0025a0 "}.lst-kix_bfzyeb917dp8-1>li:before {content: "" counter(lst-ctn-kix_bfzyeb917dp8-1, upper-latin) ". "}ul.lst-kix_ekmayt81kvbz-8 {list-style-type: none}.lst-kix_gyhqddkw9i05-0>li:before {content: "\0025cf "}ul.lst-kix_ekmayt81kvbz-7 {list-style-type: none}.lst-kix_gyhqddkw9i05-4>li:before {content: "\0025cb "}.lst-kix_l7z426mwssm0-6>li:before {content: "\0025cf "}.lst-kix_8p26nc4xx5n8-2>li:before {content: "\0025a0 "}ul.lst-kix_ekmayt81kvbz-6 {list-style-type: none}ul.lst-kix_ekmayt81kvbz-5 {list-style-type: none}.lst-kix_lemcawe54w5c-0>li:before {content: "\0025cf "}ul.lst-kix_ekmayt81kvbz-4 {list-style-type: none}ul.lst-kix_ekmayt81kvbz-3 {list-style-type: none}.lst-kix_gyhqddkw9i05-6>li:before {content: "\0025cf "}.lst-kix_8p26nc4xx5n8-0>li:before {content: "\0025cf "}.lst-kix_8p26nc4xx5n8-8>li:before {content: "\0025a0 "}ul.lst-kix_ekmayt81kvbz-2 {list-style-type: none}ul.lst-kix_ekmayt81kvbz-1 {list-style-type: none}.lst-kix_u0uqs69v9qbh-6>li:before {content: "\0025cf "}ul.lst-kix_ekmayt81kvbz-0 {list-style-type: none}ol.lst-kix_s3mi7ukxwiwf-1 {list-style-type: none}ol.lst-kix_bfzyeb917dp8-5.start {counter-reset: lst-ctn-kix_bfzyeb917dp8-5 0}ol.lst-kix_s3mi7ukxwiwf-2 {list-style-type: none}ol.lst-kix_s3mi7ukxwiwf-0 {list-style-type: none}ul.lst-kix_7tib3jrzu2u9-1 {list-style-type: none}ol.lst-kix_s3mi7ukxwiwf-5 {list-style-type: none}ul.lst-kix_7tib3jrzu2u9-2 {list-style-type: none}ol.lst-kix_s3mi7ukxwiwf-6 {list-style-type: none}ol.lst-kix_s3mi7ukxwiwf-3 {list-style-type: none}ul.lst-kix_7tib3jrzu2u9-0 {list-style-type: none}ol.lst-kix_s3mi7ukxwiwf-4 {list-style-type: none}ul.lst-kix_7tib3jrzu2u9-5 {list-style-type: none}ul.lst-kix_7tib3jrzu2u9-6 {list-style-type: none}ul.lst-kix_7tib3jrzu2u9-3 {list-style-type: none}ol.lst-kix_s3mi7ukxwiwf-7 {list-style-type: none}ul.lst-kix_7tib3jrzu2u9-4 {list-style-type: none}ol.lst-kix_s3mi7ukxwiwf-8 {list-style-type: 
none}.lst-kix_gyhqddkw9i05-2>li:before {content: "\0025a0 "}.lst-kix_8p26nc4xx5n8-4>li:before {content: "\0025cb "}ul.lst-kix_7tib3jrzu2u9-7 {list-style-type: none}ul.lst-kix_7tib3jrzu2u9-8 {list-style-type: none}ol.lst-kix_ne7nl4nhpzqr-7.start {counter-reset: lst-ctn-kix_ne7nl4nhpzqr-7 0}.lst-kix_yly1729bcywk-6>li {counter-increment: lst-ctn-kix_yly1729bcywk-6}.lst-kix_xv318blpjdo-1>li:before {content: "" counter(lst-ctn-kix_xv318blpjdo-1, upper-latin) ". "}.lst-kix_xv318blpjdo-0>li:before {content: "" counter(lst-ctn-kix_xv318blpjdo-0, upper-roman) ". "}.lst-kix_h0kibz3smj6t-4>li:before {content: "\0025cb "}.lst-kix_ne7nl4nhpzqr-4>li {counter-increment: lst-ctn-kix_ne7nl4nhpzqr-4}.lst-kix_1wulu3ra2vwv-1>li {counter-increment: lst-ctn-kix_1wulu3ra2vwv-1}.lst-kix_h0kibz3smj6t-7>li:before {content: "\0025cb "}.lst-kix_h0kibz3smj6t-8>li:before {content: "\0025a0 "}.lst-kix_11a9ub9xa97v-5>li {counter-increment: lst-ctn-kix_11a9ub9xa97v-5}.lst-kix_opi66v2qdsjs-0>li:before {content: "\0025cf "}.lst-kix_686a8e4qhxwx-7>li:before {content: "\0025cb "}.lst-kix_xv318blpjdo-8>li:before {content: "(" counter(lst-ctn-kix_xv318blpjdo-8, lower-roman) ") "}.lst-kix_xv318blpjdo-5>li:before {content: "(" counter(lst-ctn-kix_xv318blpjdo-5, lower-latin) ") "}.lst-kix_h0kibz3smj6t-3>li:before {content: "\0025cf "}.lst-kix_686a8e4qhxwx-3>li:before {content: "\0025cf "}.lst-kix_686a8e4qhxwx-4>li:before {content: "\0025cb "}.lst-kix_xv318blpjdo-4>li:before {content: "(" counter(lst-ctn-kix_xv318blpjdo-4, decimal) ") "}.lst-kix_h0kibz3smj6t-0>li:before {content: "\0025cf "}ol.lst-kix_vf0l197cqv6l-4.start {counter-reset: lst-ctn-kix_vf0l197cqv6l-4 0}ol.lst-kix_ne7nl4nhpzqr-2.start {counter-reset: lst-ctn-kix_ne7nl4nhpzqr-2 0}ol.lst-kix_11a9ub9xa97v-0.start {counter-reset: lst-ctn-kix_11a9ub9xa97v-0 0}.lst-kix_s3mi7ukxwiwf-7>li {counter-increment: lst-ctn-kix_s3mi7ukxwiwf-7}ul.lst-kix_l7z426mwssm0-1 {list-style-type: none}ul.lst-kix_l7z426mwssm0-2 {list-style-type: 
none}ul.lst-kix_l7z426mwssm0-0 {list-style-type: none}ol.lst-kix_yly1729bcywk-8.start {counter-reset: lst-ctn-kix_yly1729bcywk-8 0}.lst-kix_ne7nl4nhpzqr-2>li {counter-increment: lst-ctn-kix_ne7nl4nhpzqr-2}.lst-kix_q8ok0mh9yyto-3>li:before {content: "" counter(lst-ctn-kix_q8ok0mh9yyto-3, lower-latin) ") "}.lst-kix_pcjo479wrta-7>li:before {content: "\0025cb "}.lst-kix_686a8e4qhxwx-8>li:before {content: "\0025a0 "}.lst-kix_emhp84jkv42c-6>li {counter-increment: lst-ctn-kix_emhp84jkv42c-6}ul.lst-kix_l7z426mwssm0-7 {list-style-type: none}ul.lst-kix_l7z426mwssm0-8 {list-style-type: none}.lst-kix_q8ok0mh9yyto-2>li:before {content: "" counter(lst-ctn-kix_q8ok0mh9yyto-2, decimal) ". "}.lst-kix_pcjo479wrta-8>li:before {content: "\0025a0 "}ul.lst-kix_l7z426mwssm0-5 {list-style-type: none}ul.lst-kix_l7z426mwssm0-6 {list-style-type: none}ul.lst-kix_l7z426mwssm0-3 {list-style-type: none}ul.lst-kix_l7z426mwssm0-4 {list-style-type: none}.lst-kix_yly1729bcywk-8>li {counter-increment: lst-ctn-kix_yly1729bcywk-8}.lst-kix_vf0l197cqv6l-3>li {counter-increment: lst-ctn-kix_vf0l197cqv6l-3}.lst-kix_q8ok0mh9yyto-6>li:before {content: "(" counter(lst-ctn-kix_q8ok0mh9yyto-6, lower-roman) ") "}.lst-kix_4m04az9jmmj8-2>li:before {content: "\0025a0 "}.lst-kix_4m04az9jmmj8-6>li:before {content: "\0025cf "}.lst-kix_q8ok0mh9yyto-7>li:before {content: "(" counter(lst-ctn-kix_q8ok0mh9yyto-7, lower-latin) ") "}.lst-kix_4m04az9jmmj8-1>li:before {content: "\0025cb "}.lst-kix_4m04az9jmmj8-5>li:before {content: "\0025a0 "}ol.lst-kix_vf0l197cqv6l-5.start {counter-reset: lst-ctn-kix_vf0l197cqv6l-5 0}.lst-kix_ne7nl4nhpzqr-8>li:before {content: "" counter(lst-ctn-kix_ne7nl4nhpzqr-8, lower-roman) ". "}.lst-kix_xv318blpjdo-4>li {counter-increment: lst-ctn-kix_xv318blpjdo-4}.lst-kix_vf0l197cqv6l-8>li {counter-increment: lst-ctn-kix_vf0l197cqv6l-8}.lst-kix_ne7nl4nhpzqr-5>li:before {content: "" counter(lst-ctn-kix_ne7nl4nhpzqr-5, lower-roman) ". 
"}.lst-kix_l7z426mwssm0-1>li:before {content: "\0025cb "}.lst-kix_x1epm4iu41dp-7>li:before {content: "\0025cb "}.lst-kix_yly1729bcywk-1>li {counter-increment: lst-ctn-kix_yly1729bcywk-1}.lst-kix_6qnkx7t1adn9-4>li:before {content: "\0025cb "}.lst-kix_6qnkx7t1adn9-8>li:before {content: "\0025a0 "}.lst-kix_xv318blpjdo-6>li {counter-increment: lst-ctn-kix_xv318blpjdo-6}ol.lst-kix_jj5w63toozfm-1.start {counter-reset: lst-ctn-kix_jj5w63toozfm-1 0}.lst-kix_ne7nl4nhpzqr-1>li:before {content: "" counter(lst-ctn-kix_ne7nl4nhpzqr-1, lower-latin) ". "}.lst-kix_pcjo479wrta-4>li:before {content: "\0025cb "}ol.lst-kix_ne7nl4nhpzqr-8.start {counter-reset: lst-ctn-kix_ne7nl4nhpzqr-8 0}.lst-kix_bfzyeb917dp8-4>li:before {content: "(" counter(lst-ctn-kix_bfzyeb917dp8-4, decimal) ") "}.lst-kix_x1epm4iu41dp-3>li:before {content: "\0025cf "}.lst-kix_fd1rucpc9vz2-8>li:before {content: "\0025a0 "}.lst-kix_bfzyeb917dp8-8>li:before {content: "(" counter(lst-ctn-kix_bfzyeb917dp8-8, lower-roman) ") "}.lst-kix_fy6y7gyjejoh-2>li:before {content: "\0025a0 "}.lst-kix_fy6y7gyjejoh-6>li:before {content: "\0025cf "}ol.lst-kix_1wulu3ra2vwv-8.start {counter-reset: lst-ctn-kix_1wulu3ra2vwv-8 0}.lst-kix_pcjo479wrta-0>li:before {content: "\0025cf "}ol.lst-kix_s3mi7ukxwiwf-6.start {counter-reset: lst-ctn-kix_s3mi7ukxwiwf-6 0}ol.lst-kix_jj5w63toozfm-0.start {counter-reset: lst-ctn-kix_jj5w63toozfm-0 0}.lst-kix_emhp84jkv42c-2>li:before {content: "" counter(lst-ctn-kix_emhp84jkv42c-2, decimal) ". 
"}.lst-kix_emhp84jkv42c-6>li:before {content: "(" counter(lst-ctn-kix_emhp84jkv42c-6, lower-roman) ") "}.lst-kix_686a8e4qhxwx-0>li:before {content: "\0025cf "}.lst-kix_ndaonzmgp8vn-5>li:before {content: "\0025a0 "}.lst-kix_9jxnjym0nges-3>li:before {content: "\0025cf "}.lst-kix_opi66v2qdsjs-7>li:before {content: "\0025cb "}.lst-kix_lemcawe54w5c-7>li:before {content: "\0025cb "}.lst-kix_ndaonzmgp8vn-1>li:before {content: "\0025cb "}.lst-kix_gyhqddkw9i05-7>li:before {content: "\0025cb "}.lst-kix_jj5w63toozfm-6>li:before {content: "" counter(lst-ctn-kix_jj5w63toozfm-6, decimal) ". "}.lst-kix_u0uqs69v9qbh-3>li:before {content: "\0025cf "}.lst-kix_fd1rucpc9vz2-0>li:before {content: "\0025cf "}.lst-kix_fd1rucpc9vz2-4>li:before {content: "\0025cb "}ol.lst-kix_s3mi7ukxwiwf-5.start {counter-reset: lst-ctn-kix_s3mi7ukxwiwf-5 0}.lst-kix_opi66v2qdsjs-3>li:before {content: "\0025cf "}.lst-kix_lemcawe54w5c-3>li:before {content: "\0025cf "}.lst-kix_8p26nc4xx5n8-3>li:before {content: "\0025cf "}.lst-kix_8p26nc4xx5n8-7>li:before {content: "\0025cb "}.lst-kix_l7z426mwssm0-5>li:before {content: "\0025a0 "}.lst-kix_u0uqs69v9qbh-7>li:before {content: "\0025cb "}.lst-kix_bfzyeb917dp8-0>li:before {content: "" counter(lst-ctn-kix_bfzyeb917dp8-0, upper-roman) ". "}.lst-kix_11a9ub9xa97v-1>li:before {content: "" counter(lst-ctn-kix_11a9ub9xa97v-1, upper-latin) ". 
"}.lst-kix_gyhqddkw9i05-3>li:before {content: "\0025cf "}.lst-kix_9jxnjym0nges-7>li:before {content: "\0025cb "}.lst-kix_11a9ub9xa97v-5>li:before {content: "(" counter(lst-ctn-kix_11a9ub9xa97v-5, lower-latin) ") "}ol.lst-kix_11a9ub9xa97v-2.start {counter-reset: lst-ctn-kix_11a9ub9xa97v-2 0}.lst-kix_s3mi7ukxwiwf-4>li {counter-increment: lst-ctn-kix_s3mi7ukxwiwf-4}.lst-kix_7tib3jrzu2u9-4>li:before {content: "\0025cb "}.lst-kix_ne7nl4nhpzqr-5>li {counter-increment: lst-ctn-kix_ne7nl4nhpzqr-5}ol.lst-kix_xv318blpjdo-4 {list-style-type: none}ol.lst-kix_s3mi7ukxwiwf-7.start {counter-reset: lst-ctn-kix_s3mi7ukxwiwf-7 0}ol.lst-kix_xv318blpjdo-5 {list-style-type: none}ol.lst-kix_xv318blpjdo-6 {list-style-type: none}ol.lst-kix_xv318blpjdo-7 {list-style-type: none}ol.lst-kix_xv318blpjdo-0 {list-style-type: none}ol.lst-kix_xv318blpjdo-1 {list-style-type: none}.lst-kix_7tib3jrzu2u9-1>li:before {content: "\0025cb "}ol.lst-kix_ne7nl4nhpzqr-4.start {counter-reset: lst-ctn-kix_ne7nl4nhpzqr-4 0}ol.lst-kix_xv318blpjdo-2 {list-style-type: none}ol.lst-kix_xv318blpjdo-3 {list-style-type: none}.lst-kix_q8ok0mh9yyto-6>li {counter-increment: lst-ctn-kix_q8ok0mh9yyto-6}ul.lst-kix_5anu1k9tsyak-7 {list-style-type: none}.lst-kix_jj5w63toozfm-6>li {counter-increment: lst-ctn-kix_jj5w63toozfm-6}ul.lst-kix_5anu1k9tsyak-8 {list-style-type: none}ul.lst-kix_5anu1k9tsyak-5 {list-style-type: none}ol.lst-kix_yly1729bcywk-5.start {counter-reset: lst-ctn-kix_yly1729bcywk-5 0}.lst-kix_11a9ub9xa97v-2>li {counter-increment: lst-ctn-kix_11a9ub9xa97v-2}ol.lst-kix_1wulu3ra2vwv-0 {list-style-type: none}ul.lst-kix_5anu1k9tsyak-6 {list-style-type: none}ol.lst-kix_1wulu3ra2vwv-1 {list-style-type: none}ul.lst-kix_5anu1k9tsyak-3 {list-style-type: none}ul.lst-kix_5anu1k9tsyak-4 {list-style-type: none}ul.lst-kix_5anu1k9tsyak-1 {list-style-type: none}.lst-kix_xv318blpjdo-3>li {counter-increment: lst-ctn-kix_xv318blpjdo-3}ul.lst-kix_5anu1k9tsyak-2 {list-style-type: none}ol.lst-kix_1wulu3ra2vwv-6 {list-style-type: 
none}ul.lst-kix_5anu1k9tsyak-0 {list-style-type: none}ol.lst-kix_1wulu3ra2vwv-7 {list-style-type: none}ol.lst-kix_1wulu3ra2vwv-8 {list-style-type: none}.lst-kix_7tib3jrzu2u9-6>li:before {content: "\0025cf "}ol.lst-kix_xv318blpjdo-8 {list-style-type: none}ol.lst-kix_1wulu3ra2vwv-2 {list-style-type: none}ol.lst-kix_1wulu3ra2vwv-3 {list-style-type: none}.lst-kix_7tib3jrzu2u9-7>li:before {content: "\0025cb "}ol.lst-kix_1wulu3ra2vwv-4 {list-style-type: none}ol.lst-kix_1wulu3ra2vwv-5 {list-style-type: none}.lst-kix_vf0l197cqv6l-4>li {counter-increment: lst-ctn-kix_vf0l197cqv6l-4}ol.lst-kix_1wulu3ra2vwv-5.start {counter-reset: lst-ctn-kix_1wulu3ra2vwv-5 0}ol.lst-kix_11a9ub9xa97v-7.start {counter-reset: lst-ctn-kix_11a9ub9xa97v-7 0}ol.lst-kix_q8ok0mh9yyto-2.start {counter-reset: lst-ctn-kix_q8ok0mh9yyto-2 0}.lst-kix_11a9ub9xa97v-6>li {counter-increment: lst-ctn-kix_11a9ub9xa97v-6}.lst-kix_5anu1k9tsyak-7>li:before {content: "\0025cb "}.lst-kix_j42a5dwgnqyq-6>li:before {content: "\0025cf "}ol.lst-kix_xv318blpjdo-2.start {counter-reset: lst-ctn-kix_xv318blpjdo-2 0}.lst-kix_1wulu3ra2vwv-4>li {counter-increment: lst-ctn-kix_1wulu3ra2vwv-4}.lst-kix_q8ok0mh9yyto-2>li {counter-increment: lst-ctn-kix_q8ok0mh9yyto-2}ol.lst-kix_jj5w63toozfm-4.start {counter-reset: lst-ctn-kix_jj5w63toozfm-4 0}.lst-kix_vf0l197cqv6l-0>li {counter-increment: lst-ctn-kix_vf0l197cqv6l-0}ol.lst-kix_yly1729bcywk-0.start {counter-reset: lst-ctn-kix_yly1729bcywk-0 0}.lst-kix_6qnkx7t1adn9-2>li:before {content: "\0025a0 "}.lst-kix_qeqyxe7gm97l-6>li:before {content: "\0025cf "}.lst-kix_qeqyxe7gm97l-4>li:before {content: "\0025cb "}.lst-kix_j42a5dwgnqyq-1>li:before {content: "\0025cb "}.lst-kix_qeqyxe7gm97l-1>li:before {content: "\0025cb "}.lst-kix_qeqyxe7gm97l-3>li:before {content: "\0025cf "}.lst-kix_j42a5dwgnqyq-4>li:before {content: "\0025cb "}ol.lst-kix_1wulu3ra2vwv-0.start {counter-reset: lst-ctn-kix_1wulu3ra2vwv-0 0}ol.lst-kix_q8ok0mh9yyto-7.start {counter-reset: lst-ctn-kix_q8ok0mh9yyto-7 
0}.lst-kix_j42a5dwgnqyq-3>li:before {content: "\0025cf "}.lst-kix_jj5w63toozfm-2>li {counter-increment: lst-ctn-kix_jj5w63toozfm-2}ol.lst-kix_1wulu3ra2vwv-3.start {counter-reset: lst-ctn-kix_1wulu3ra2vwv-3 0}ol.lst-kix_q8ok0mh9yyto-0.start {counter-reset: lst-ctn-kix_q8ok0mh9yyto-0 0}.lst-kix_6qnkx7t1adn9-5>li:before {content: "\0025a0 "}.lst-kix_6qnkx7t1adn9-7>li:before {content: "\0025cb "}ol.lst-kix_yly1729bcywk-3.start {counter-reset: lst-ctn-kix_yly1729bcywk-3 0}.lst-kix_iv2x96orjh4l-7>li:before {content: "\0025cb "}.lst-kix_pcjo479wrta-3>li:before {content: "\0025cf "}.lst-kix_yly1729bcywk-6>li:before {content: "" counter(lst-ctn-kix_yly1729bcywk-6, decimal) ". "}.lst-kix_ndaonzmgp8vn-6>li:before {content: "\0025cf "}.lst-kix_5anu1k9tsyak-1>li:before {content: "\0025cb "}.lst-kix_ndaonzmgp8vn-8>li:before {content: "\0025a0 "}.lst-kix_pcjo479wrta-1>li:before {content: "\0025cb "}.lst-kix_iv2x96orjh4l-1>li:before {content: "\0025cb "}.lst-kix_yly1729bcywk-4>li:before {content: "" counter(lst-ctn-kix_yly1729bcywk-4, lower-latin) ". 
"}.lst-kix_bfzyeb917dp8-8>li {counter-increment: lst-ctn-kix_bfzyeb917dp8-8}.lst-kix_ne7nl4nhpzqr-1>li {counter-increment: lst-ctn-kix_ne7nl4nhpzqr-1}.lst-kix_opi66v2qdsjs-4>li:before {content: "\0025cb "}.lst-kix_fd1rucpc9vz2-1>li:before {content: "\0025cb "}.lst-kix_fd1rucpc9vz2-3>li:before {content: "\0025cf "}.lst-kix_opi66v2qdsjs-6>li:before {content: "\0025cf "}ol.lst-kix_11a9ub9xa97v-4.start {counter-reset: lst-ctn-kix_11a9ub9xa97v-4 0}.lst-kix_xoos54gyybzj-4>li:before {content: "\0025cb "}.lst-kix_bfzyeb917dp8-1>li {counter-increment: lst-ctn-kix_bfzyeb917dp8-1}.lst-kix_ndaonzmgp8vn-0>li:before {content: "\0025cf "}ol.lst-kix_jj5w63toozfm-6.start {counter-reset: lst-ctn-kix_jj5w63toozfm-6 0}.lst-kix_xoos54gyybzj-2>li:before {content: "\0025a0 "}ul.lst-kix_b7256qmdgo85-6 {list-style-type: none}ul.lst-kix_b7256qmdgo85-7 {list-style-type: none}.lst-kix_1wulu3ra2vwv-3>li {counter-increment: lst-ctn-kix_1wulu3ra2vwv-3}ul.lst-kix_b7256qmdgo85-8 {list-style-type: none}ul.lst-kix_b7256qmdgo85-2 {list-style-type: none}ul.lst-kix_b7256qmdgo85-3 {list-style-type: none}.lst-kix_11a9ub9xa97v-4>li:before {content: "(" counter(lst-ctn-kix_11a9ub9xa97v-4, decimal) ") "}ul.lst-kix_b7256qmdgo85-4 {list-style-type: none}ul.lst-kix_b7256qmdgo85-5 {list-style-type: none}.lst-kix_f5kb4hocu5hh-1>li:before {content: "\0025cb "}.lst-kix_11a9ub9xa97v-2>li:before {content: "" counter(lst-ctn-kix_11a9ub9xa97v-2, decimal) ". 
"}.lst-kix_9jxnjym0nges-4>li:before {content: "\0025cb "}.lst-kix_86hsx13ssqid-4>li:before {content: "\0025cb "}ul.lst-kix_b7256qmdgo85-0 {list-style-type: none}ul.lst-kix_b7256qmdgo85-1 {list-style-type: none}.lst-kix_9jxnjym0nges-6>li:before {content: "\0025cf "}.lst-kix_86hsx13ssqid-2>li:before {content: "\0025a0 "}.lst-kix_bfzyeb917dp8-5>li {counter-increment: lst-ctn-kix_bfzyeb917dp8-5}.lst-kix_h0kibz3smj6t-6>li:before {content: "\0025cf "}.lst-kix_11a9ub9xa97v-3>li {counter-increment: lst-ctn-kix_11a9ub9xa97v-3}ol.lst-kix_11a9ub9xa97v-5.start {counter-reset: lst-ctn-kix_11a9ub9xa97v-5 0}ul.lst-kix_x1epm4iu41dp-0 {list-style-type: none}.lst-kix_f5kb4hocu5hh-4>li:before {content: "\0025cb "}ol.lst-kix_xv318blpjdo-0.start {counter-reset: lst-ctn-kix_xv318blpjdo-0 0}.lst-kix_jkgkf1u9sy0c-0>li:before {content: "\0025cf "}.lst-kix_f5kb4hocu5hh-7>li:before {content: "\0025cb "}ul.lst-kix_x1epm4iu41dp-6 {list-style-type: none}ul.lst-kix_x1epm4iu41dp-5 {list-style-type: none}ul.lst-kix_x1epm4iu41dp-8 {list-style-type: none}.lst-kix_jkgkf1u9sy0c-3>li:before {content: "\0025cf "}ul.lst-kix_x1epm4iu41dp-7 {list-style-type: none}ul.lst-kix_x1epm4iu41dp-2 {list-style-type: none}.lst-kix_xv318blpjdo-0>li {counter-increment: lst-ctn-kix_xv318blpjdo-0}ol.lst-kix_jj5w63toozfm-8.start {counter-reset: lst-ctn-kix_jj5w63toozfm-8 0}ul.lst-kix_x1epm4iu41dp-1 {list-style-type: none}ul.lst-kix_x1epm4iu41dp-4 {list-style-type: none}ul.lst-kix_x1epm4iu41dp-3 {list-style-type: none}ol.lst-kix_1wulu3ra2vwv-2.start {counter-reset: lst-ctn-kix_1wulu3ra2vwv-2 0}.lst-kix_686a8e4qhxwx-6>li:before {content: "\0025cf "}.lst-kix_s3mi7ukxwiwf-5>li {counter-increment: lst-ctn-kix_s3mi7ukxwiwf-5}.lst-kix_h0kibz3smj6t-1>li:before {content: "\0025cb "}.lst-kix_xv318blpjdo-6>li:before {content: "(" counter(lst-ctn-kix_xv318blpjdo-6, lower-roman) ") "}ul.lst-kix_86hsx13ssqid-3 {list-style-type: none}ul.lst-kix_86hsx13ssqid-2 {list-style-type: none}.lst-kix_xv318blpjdo-3>li:before {content: "" 
counter(lst-ctn-kix_xv318blpjdo-3, lower-latin) ") "}ul.lst-kix_86hsx13ssqid-5 {list-style-type: none}ul.lst-kix_86hsx13ssqid-4 {list-style-type: none}.lst-kix_686a8e4qhxwx-1>li:before {content: "\0025cb "}ol.lst-kix_yly1729bcywk-2.start {counter-reset: lst-ctn-kix_yly1729bcywk-2 0}ul.lst-kix_86hsx13ssqid-1 {list-style-type: none}.lst-kix_jkgkf1u9sy0c-8>li:before {content: "\0025a0 "}ul.lst-kix_86hsx13ssqid-0 {list-style-type: none}ul.lst-kix_fy6y7gyjejoh-0 {list-style-type: none}ul.lst-kix_fy6y7gyjejoh-1 {list-style-type: none}ul.lst-kix_fy6y7gyjejoh-2 {list-style-type: none}ul.lst-kix_fy6y7gyjejoh-3 {list-style-type: none}.lst-kix_b7256qmdgo85-1>li:before {content: "\0025cb "}ul.lst-kix_86hsx13ssqid-7 {list-style-type: none}ul.lst-kix_h9mjmxara98n-0 {list-style-type: none}ul.lst-kix_86hsx13ssqid-6 {list-style-type: none}ul.lst-kix_86hsx13ssqid-8 {list-style-type: none}ul.lst-kix_h9mjmxara98n-3 {list-style-type: none}.lst-kix_4m04az9jmmj8-8>li:before {content: "\0025a0 "}ul.lst-kix_fy6y7gyjejoh-8 {list-style-type: none}ul.lst-kix_h9mjmxara98n-4 {list-style-type: none}ul.lst-kix_h9mjmxara98n-1 {list-style-type: none}ul.lst-kix_h9mjmxara98n-2 {list-style-type: none}ul.lst-kix_h9mjmxara98n-7 {list-style-type: none}ul.lst-kix_fy6y7gyjejoh-4 {list-style-type: none}ul.lst-kix_h9mjmxara98n-8 {list-style-type: none}ul.lst-kix_fy6y7gyjejoh-5 {list-style-type: none}ul.lst-kix_h9mjmxara98n-5 {list-style-type: none}ul.lst-kix_fy6y7gyjejoh-6 {list-style-type: none}ul.lst-kix_h9mjmxara98n-6 {list-style-type: none}ul.lst-kix_fy6y7gyjejoh-7 {list-style-type: none}.lst-kix_b7256qmdgo85-6>li:before {content: "\0025cf "}ol.lst-kix_yly1729bcywk-1.start {counter-reset: lst-ctn-kix_yly1729bcywk-1 0}ol.lst-kix_1wulu3ra2vwv-1.start {counter-reset: lst-ctn-kix_1wulu3ra2vwv-1 0}.lst-kix_ne7nl4nhpzqr-8>li {counter-increment: lst-ctn-kix_ne7nl4nhpzqr-8}.lst-kix_q97rvvc7c69e-0>li:before {content: "\0025cf "}.lst-kix_q8ok0mh9yyto-1>li:before {content: "" counter(lst-ctn-kix_q8ok0mh9yyto-1, 
upper-latin) ". "}.lst-kix_4m04az9jmmj8-0>li:before {content: "\0025cf "}ol.lst-kix_11a9ub9xa97v-6.start {counter-reset: lst-ctn-kix_11a9ub9xa97v-6 0}.lst-kix_q8ok0mh9yyto-4>li:before {content: "(" counter(lst-ctn-kix_q8ok0mh9yyto-4, decimal) ") "}.lst-kix_emhp84jkv42c-0>li {counter-increment: lst-ctn-kix_emhp84jkv42c-0}.lst-kix_q8ok0mh9yyto-5>li {counter-increment: lst-ctn-kix_q8ok0mh9yyto-5}ul.lst-kix_lemcawe54w5c-7 {list-style-type: none}ul.lst-kix_lemcawe54w5c-8 {list-style-type: none}.lst-kix_ne7nl4nhpzqr-6>li:before {content: "" counter(lst-ctn-kix_ne7nl4nhpzqr-6, decimal) ". "}ul.lst-kix_lemcawe54w5c-5 {list-style-type: none}ul.lst-kix_lemcawe54w5c-6 {list-style-type: none}ul.lst-kix_lemcawe54w5c-3 {list-style-type: none}ul.lst-kix_lemcawe54w5c-4 {list-style-type: none}ul.lst-kix_lemcawe54w5c-1 {list-style-type: none}ul.lst-kix_lemcawe54w5c-2 {list-style-type: none}.lst-kix_s3mi7ukxwiwf-1>li {counter-increment: lst-ctn-kix_s3mi7ukxwiwf-1}ul.lst-kix_lemcawe54w5c-0 {list-style-type: none}.lst-kix_4m04az9jmmj8-3>li:before {content: "\0025cf "}ul.lst-kix_f5kb4hocu5hh-4 {list-style-type: none}ul.lst-kix_f5kb4hocu5hh-3 {list-style-type: none}ul.lst-kix_f5kb4hocu5hh-6 {list-style-type: none}ul.lst-kix_f5kb4hocu5hh-5 {list-style-type: none}ul.lst-kix_f5kb4hocu5hh-8 {list-style-type: none}ul.lst-kix_f5kb4hocu5hh-7 {list-style-type: none}.lst-kix_xv318blpjdo-7>li {counter-increment: lst-ctn-kix_xv318blpjdo-7}ol.lst-kix_xv318blpjdo-7.start {counter-reset: lst-ctn-kix_xv318blpjdo-7 0}.lst-kix_gfromclascha-1>li:before {content: "\0025cb "}.lst-kix_86hsx13ssqid-7>li:before {content: "\0025cb "}.lst-kix_yly1729bcywk-0>li {counter-increment: lst-ctn-kix_yly1729bcywk-0}.lst-kix_ne7nl4nhpzqr-3>li:before {content: "" counter(lst-ctn-kix_ne7nl4nhpzqr-3, decimal) ". 
"}ol.lst-kix_q8ok0mh9yyto-3.start {counter-reset: lst-ctn-kix_q8ok0mh9yyto-3 0}.lst-kix_1wulu3ra2vwv-7>li {counter-increment: lst-ctn-kix_1wulu3ra2vwv-7}ul.lst-kix_f5kb4hocu5hh-0 {list-style-type: none}ul.lst-kix_f5kb4hocu5hh-2 {list-style-type: none}ul.lst-kix_f5kb4hocu5hh-1 {list-style-type: none}ul.lst-kix_fd1rucpc9vz2-0 {list-style-type: none}.lst-kix_fy6y7gyjejoh-0>li:before {content: "\0025cf "}ul.lst-kix_fd1rucpc9vz2-1 {list-style-type: none}ul.lst-kix_fd1rucpc9vz2-2 {list-style-type: none}.lst-kix_yly1729bcywk-1>li:before {content: "" counter(lst-ctn-kix_yly1729bcywk-1, lower-latin) ". "}ul.lst-kix_fd1rucpc9vz2-3 {list-style-type: none}ol.lst-kix_q8ok0mh9yyto-6.start {counter-reset: lst-ctn-kix_q8ok0mh9yyto-6 0}ul.lst-kix_fd1rucpc9vz2-8 {list-style-type: none}.lst-kix_pcjo479wrta-6>li:before {content: "\0025cf "}.lst-kix_bfzyeb917dp8-6>li:before {content: "(" counter(lst-ctn-kix_bfzyeb917dp8-6, lower-roman) ") "}.lst-kix_6ril5iwt0fcl-3>li:before {content: "\0025cf "}ul.lst-kix_fd1rucpc9vz2-4 {list-style-type: none}ul.lst-kix_fd1rucpc9vz2-5 {list-style-type: none}ul.lst-kix_fd1rucpc9vz2-6 {list-style-type: none}.lst-kix_fd1rucpc9vz2-6>li:before {content: "\0025cf "}ul.lst-kix_fd1rucpc9vz2-7 {list-style-type: none}.lst-kix_5anu1k9tsyak-4>li:before {content: "\0025cb "}.lst-kix_vf0l197cqv6l-7>li {counter-increment: lst-ctn-kix_vf0l197cqv6l-7}.lst-kix_s3mi7ukxwiwf-8>li {counter-increment: lst-ctn-kix_s3mi7ukxwiwf-8}.lst-kix_iv2x96orjh4l-4>li:before {content: "\0025cb "}.lst-kix_s3mi7ukxwiwf-6>li:before {content: "" counter(lst-ctn-kix_s3mi7ukxwiwf-6, decimal) ". 
"}.lst-kix_fy6y7gyjejoh-8>li:before {content: "\0025a0 "}ol.lst-kix_xv318blpjdo-4.start {counter-reset: lst-ctn-kix_xv318blpjdo-4 0}.lst-kix_x1epm4iu41dp-1>li:before {content: "\0025cb "}.lst-kix_q97rvvc7c69e-8>li:before {content: "\0025a0 "}ol.lst-kix_xv318blpjdo-5.start {counter-reset: lst-ctn-kix_xv318blpjdo-5 0}ul.lst-kix_8p26nc4xx5n8-0 {list-style-type: none}ul.lst-kix_8p26nc4xx5n8-1 {list-style-type: none}ul.lst-kix_8p26nc4xx5n8-2 {list-style-type: none}ul.lst-kix_8p26nc4xx5n8-3 {list-style-type: none}ul.lst-kix_8p26nc4xx5n8-4 {list-style-type: none}.lst-kix_emhp84jkv42c-0>li:before {content: "" counter(lst-ctn-kix_emhp84jkv42c-0, upper-roman) ". "}.lst-kix_emhp84jkv42c-8>li:before {content: "(" counter(lst-ctn-kix_emhp84jkv42c-8, lower-roman) ") "}.lst-kix_9jxnjym0nges-1>li:before {content: "\0025cb "}.lst-kix_xoos54gyybzj-7>li:before {content: "\0025cb "}ol.lst-kix_q8ok0mh9yyto-5.start {counter-reset: lst-ctn-kix_q8ok0mh9yyto-5 0}.lst-kix_ndaonzmgp8vn-3>li:before {content: "\0025cf "}.lst-kix_jj5w63toozfm-8>li:before {content: "" counter(lst-ctn-kix_jj5w63toozfm-8, lower-roman) ". 
"}.lst-kix_u0uqs69v9qbh-1>li:before {content: "\0025cb "}.lst-kix_opi66v2qdsjs-1>li:before {content: "\0025cb "}.lst-kix_lemcawe54w5c-1>li:before {content: "\0025cb "}.lst-kix_11a9ub9xa97v-7>li:before {content: "(" counter(lst-ctn-kix_11a9ub9xa97v-7, lower-latin) ") "}.lst-kix_8p26nc4xx5n8-5>li:before {content: "\0025a0 "}ol.lst-kix_xv318blpjdo-6.start {counter-reset: lst-ctn-kix_xv318blpjdo-6 0}.lst-kix_l7z426mwssm0-7>li:before {content: "\0025cb "}.lst-kix_1wulu3ra2vwv-0>li {counter-increment: lst-ctn-kix_1wulu3ra2vwv-0}.lst-kix_gyhqddkw9i05-5>li:before {content: "\0025a0 "}.lst-kix_hchtl271h88l-2>li:before {content: "\0025a0 "}ul.lst-kix_6qnkx7t1adn9-8 {list-style-type: none}ol.lst-kix_q8ok0mh9yyto-4.start {counter-reset: lst-ctn-kix_q8ok0mh9yyto-4 0}ul.lst-kix_6qnkx7t1adn9-7 {list-style-type: none}ul.lst-kix_8p26nc4xx5n8-5 {list-style-type: none}ul.lst-kix_6qnkx7t1adn9-6 {list-style-type: none}ul.lst-kix_8p26nc4xx5n8-6 {list-style-type: none}ul.lst-kix_6qnkx7t1adn9-5 {list-style-type: none}.lst-kix_sholqrhc62dh-3>li:before {content: "\0025cf "}ul.lst-kix_8p26nc4xx5n8-7 {list-style-type: none}ul.lst-kix_6qnkx7t1adn9-4 {list-style-type: none}ul.lst-kix_8p26nc4xx5n8-8 {list-style-type: none}ul.lst-kix_6qnkx7t1adn9-3 {list-style-type: none}ul.lst-kix_6qnkx7t1adn9-2 {list-style-type: none}ul.lst-kix_6qnkx7t1adn9-1 {list-style-type: none}.lst-kix_1wulu3ra2vwv-1>li:before {content: "" counter(lst-ctn-kix_1wulu3ra2vwv-1, upper-latin) ". 
"}ul.lst-kix_6qnkx7t1adn9-0 {list-style-type: none}ol {margin: 0;padding: 0}table td, table th {padding: 0}.c4 {color: #ff5e0e;font-weight: 700;text-decoration: none;vertical-align: baseline;font-size: 18pt;font-family: "PT Sans Narrow";font-style: normal}.c7 {color: #38761d;font-weight: 400;text-decoration: none;vertical-align: baseline;font-size: 16pt;font-family: "PT Sans Narrow";font-style: normal}.c14 {color: #695d46;font-weight: 700;text-decoration: none;vertical-align: baseline;font-size: 42pt;font-family: "PT Sans Narrow";font-style: normal}.c0 {color: #695d46;font-weight: 400;text-decoration: none;vertical-align: baseline;font-size: 11pt;font-family: "Open Sans";font-style: normal}.c1 {padding-top: 6pt;padding-bottom: 0pt;line-height: 1.2;orphans: 2;widows: 2;text-align: left}.c8 {padding-top: 18pt;padding-bottom: 6pt;line-height: 1.2;page-break-after: avoid;text-align: left}.c11 {padding-top: 24pt;padding-bottom: 0pt;line-height: 1.3;page-break-after: avoid;text-align: left}.c17 {padding-top: 6pt;padding-bottom: 0pt;line-height: 1.2;page-break-after: avoid;text-align: left}.c19 {text-decoration: none;vertical-align: baseline;font-size: 12pt;font-style: normal}.c9 {padding-top: 16pt;padding-bottom: 0pt;line-height: 1.0;text-align: left}.c18 {padding-top: 0pt;padding-bottom: 0pt;line-height: 1.2;text-align: left}.c10 {font-size: 18pt;font-family: "PT Sans Narrow";color: #ff5e0e;font-weight: 700}.c13 {font-family: "Open Sans";color: #695d46;font-weight: 400}.c12 {background-color: #ffffff;max-width: 468pt;padding: 72pt 72pt 72pt 72pt}.c5 {padding: 0;margin: 0}.c15 {color: inherit;text-decoration: inherit}.c16 {color: #1155cc;text-decoration: underline}.c3 {padding-left: 0pt}.c6 {margin-left: 72pt}.c2 {margin-left: 36pt}.title {padding-top: 0pt;color: #695d46;font-size: 26pt;padding-bottom: 3pt;font-family: "Open Sans";line-height: 1.2;page-break-after: avoid;orphans: 2;widows: 2;text-align: left}.subtitle {padding-top: 0pt;color: #666666;font-size: 
Oracle Fusion Middleware Deployments Using Docker Swarm, Part III

Overview

This is the third in a series of blogs that describe how to build a Fusion Middleware (FMW) cluster that runs as a number of Docker containers built from Docker images. These containers are coordinated using Docker Swarm and can be deployed to a single host machine or to multiple hosts. This simplifies the task of building FMW clusters and also makes it easier to scale them in and out (adding or subtracting host machines) as well as up and down (using bigger or smaller host machines).
Using Docker also helps us avoid port conflicts when running multiple servers on the same physical machine. When we use Swarm we will see that we also get the benefit of a built-in load balancer.

This blog uses Oracle Service Bus (OSB) as the FMW product, but the principles are applicable to other FMW products.

In our previous blog we explained how to build the required Docker images for running FMW on Docker Swarm and created a database container. In this entry we will explain how to create an FMW domain image and how to run it in a Docker container. The next blog will cover how to run this in Docker Swarm.

Key Steps in Creating a Service Bus Cluster

When creating a Service Bus cluster we need to do the following:

1. Create the required schemas in a database.
   - Service Bus 12.1 adds a number of new features, such as re-sequencing, that require the use of SOA Suite schemas. These are in addition to the database requirements for Web Services Security Manager that existed in Service Bus 11g.
   - The Repository Creation Utility (RCU) is used to create the schemas in the database.
2. Create a Service Bus domain.
   - The Service Bus domain contains all the required Service Bus binaries and associated configuration. Within the domain we will create a Service Bus cluster.
   - The domain can be created with the WebLogic Scripting Tool (WLST) by applying the Service Bus domain template.
3. Create a Service Bus cluster within the domain.
   - The Service Bus cluster allows us to have multiple Service Bus servers running the same proxy services and sharing the load.
   - The cluster can be created, and servers assigned to it, using WLST.

These steps need to be factored into the way we build our Docker images and containers, and ultimately into how we create Docker Swarm services.

Mapping the Service Bus Cluster onto Docker

There are a number of ways in which we could map a Service Bus cluster onto Docker.
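The schema-creation step listed above is typically driven by RCU in silent mode. The following is a minimal sketch of such an invocation; the ORACLE_HOME path, connect string, schema prefix, and component list are illustrative assumptions rather than the blog's actual script, and the command is echoed rather than executed so it can be reviewed first:

```shell
#!/bin/sh
# Hedged sketch of an RCU silent-mode run to create the OSB/SOA schemas.
# ORACLE_HOME, the connect string, the prefix and the component list are
# illustrative; adjust them to your installation before running.
ORACLE_HOME=${ORACLE_HOME:-/u01/oracle}
RCU_CMD="$ORACLE_HOME/oracle_common/bin/rcu -silent -createRepository \
 -databaseType ORACLE -connectString osbdb:1521/OSB \
 -dbUser sys -dbRole sysdba \
 -schemaPrefix DEV -component STB -component MDS -component SOAINFRA"
# Echo instead of executing; a container-creation script would run it.
echo "$RCU_CMD"
```

In a containerized setup this run would happen once, when the Admin Server container is first created, as described below.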
We have chosen the following approach:

- Create a Docker image that contains the Service Bus domain configuration.
  - This is layered on top of the OSB installation image, which allows us to modify and rebuild the scripts without having to reinstall the FMW software, speeding up the development cycle of the image. Once the scripts are working they could be placed in the FMW binary image, reducing the number of layers.
- When creating a container from the image we run the RCU to create the schemas in the database. We also run scripts to create the domain and add servers to the domain as needed.
- The same Docker image is used for both Admin and Managed Servers.
  - Depending on its parameters, the container decides whether it is an Admin Server or a member of a cluster.
- All servers need access to the database.
- All servers need access to the Admin Server.
- The Admin Server requires access to all servers.

Container Summary

We effectively have two images, both built from multiple layers as explained previously:

- The database image holds the binaries for the database and scripts to create a database instance.
- The Fusion Middleware image holds the FMW binaries and scripts to create a domain, or extend an existing domain.

We have a single Docker container to run the database from the database image. We have multiple Docker containers, one per managed server, to run Fusion Middleware from the single domain image.

To simplify starting the containers, the git project includes run scripts (run.sh for the database and runNode.sh for FMW) that can be used to create containers.

Database Container

The database container runs from the database image and has the following characteristics:

- When the container is created, it creates and starts a database instance.
- After starting the database, we change the database password.
- When the container is stopped, it shuts down the database instance.
- When the container is started, it starts the database instance.
- The database container exposes the database port (1521 in this case).
- Only a single container runs a given
database.

The database container is started using the following command:

docker run -d -it --name Oracle12cDB --hostname osbdb \
        -p 1521:1521 -p 5500:5500 \
        -e ORACLE_SID=ORCL -e ORACLE_PDB=OSB \
        -v /opt/oracle/oradata/OracleDB \
        oracle/database:12.1.0.2-ee

We expose ports 1521 (database) and 5500 (Enterprise Manager).

Admin Server Container

The Admin Server container runs from the FMW domain image, or just the FMW image if the layers have been collapsed. It has the following characteristics:

- When the container is created, it runs the RCU to configure the database.
- When the container is created, it creates an FMW domain and cluster.
- When the container is created, it starts the Admin Server.
- When the container is stopped, it stops the Admin Server.
- When the container is started, it starts the Admin Server.
- The Admin Server exposes the admin console port (7001 in this case).
- Only a single Admin Server container runs in a given domain.
- The same image is used for both Admin and Managed Servers.

We start the Admin Server using the following command:

runNode.sh admin

This translates to:

docker run -d -it \
        -e "MS_NAME=AdminServer" \
        -e "MACHINE_NAME=AdminServerMachine" \
        --name wlsadmin \
        --add-host osbdb:172.17.0.2 \
        --add-host wlsadmin:172.17.0.3 \
        --add-host wls1:172.17.0.4 \
        --add-host wls2:172.17.0.5 \
        --add-host wls3:172.17.0.6 \
        --add-host wls4:172.17.0.7 \
        --hostname wlsadmin \
        -p 7001:7001 \
        oracle/osb_domain:12.2.1.2 \
        /u01/oracle/container-scripts/createAndStartOSBDomain.sh

We need to add the hostnames of the managed servers and the database to the /etc/hosts file so that the Admin Server can access them. We will show how to avoid doing this in the final blog post.

Managed Server Containers

The managed server containers run from the same FMW domain image as the Admin Server.
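For reference, each --add-host flag in the command above simply appends a host-to-address mapping to the container's /etc/hosts, so inside the wlsadmin container that file ends up containing entries along these lines (addresses as in the example command; the actual addresses depend on Docker's bridge network):

```text
172.17.0.2  osbdb
172.17.0.3  wlsadmin
172.17.0.4  wls1
172.17.0.5  wls2
172.17.0.6  wls3
172.17.0.7  wls4
```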
It has the following characteristics:

- When the container is created, it creates a new server in the domain and adds it to the FMW cluster.
- When the container is created, it creates a local copy of the domain files.
- When the container is created, it starts the Managed Server.
- When the container is stopped, it stops the Managed Server.
- When the container is started, it starts the Managed Server.
- The Managed Server exposes its listen port (8011 in this case).
- Multiple Managed Server containers may run in a given domain.
- The same image is used for both Admin and Managed Servers.

We start the managed servers using the following command:

runNode.sh N

where N is the number of the managed server. When N=2 this translates to:

docker run -d -it \
        -e "MS_NAME=osb_server2" \
        -e "MACHINE_NAME=OsbServer2Machine" \
        --name wls2 \
        --add-host osbdb:172.17.0.2 \
        --add-host wlsadmin:172.17.0.3 \
        --add-host wls1:172.17.0.4 \
        --add-host wls2:172.17.0.5 \
        --add-host wls3:172.17.0.6 \
        --add-host wls4:172.17.0.7 \
        --hostname wls2 \
        -p 8013:8011 \
        oracle/osb_domain:12.2.1.2 \
        /u01/oracle/container-scripts/createAndStartOSBDomain.sh

Note that all the managed servers listen on port 8011. Because they each run in their own container there is no conflict in their port numbers, but we need to map them to distinct host ports so that they can be accessed externally without conflicts.

Special Notes for Service Bus

The first managed server in a Service Bus cluster is special because it runs singleton tasks related to reporting: collecting performance information from the other nodes in the cluster, aggregating it, and making it available to the console.
Because of this we decided to always create a Service Bus domain with a pre-existing single Managed Server in the cluster with the correct singleton targeting. Because this server already exists, if a container detects it is supposed to run Managed Server 1 it does not create the server or associate it with a cluster; it just assigns it to a machine (see the next section for details) and creates the local managed server domain.

Containers and FMW Mapping

Each container maps to a single WebLogic server, either the Admin Server or a Managed Server in a cluster.

The Admin Server container is responsible for running the Repository Creation Utility, creating the domain and configuring it for the particular FMW product (in our case Service Bus).

The Managed Server containers are responsible for adding a new Managed Server to the cluster and creating a local managed server domain.

Both Admin and Managed Server containers need to figure out key facts about themselves:

- Hostname - used to set the listen address for the server and also the address of the machine.
- Type - Admin or Managed Server, to decide what to do on creation.
- Server identifier - managed servers need to make sure they have unique server names.
- Associated Admin Server - managed servers must contact the Admin Server to obtain and update domain configuration.
- Database server - Admin Servers must know about the database to be able to run the RCU and create the data sources required by the FMW product.

Starting the FMW Cluster

The FMW cluster is started as follows:

1. A database container is created/started.
2. An Admin Server container is created/started.
3. One or more managed server containers are created/started.

Containers are created using the "docker run" command. We use the "-d" flag to run them as daemon processes. By default the CMD directive in the Dockerfile chooses the command or script to run on container startup. We use this for the database container.
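Because the same startup script runs both when a container is first created and when it is later restarted, the script has to detect which case it is in. A minimal sketch of that check follows, assuming an illustrative DOMAIN_HOME path (not taken from the blog's scripts):

```shell
#!/bin/sh
# Hedged sketch: distinguish a container's first run (no domain yet) from a
# restart (domain directory already present). The DOMAIN_HOME default below
# is an illustrative path.
first_run_check() {
  # Prints "first-run" when the domain directory is absent, "restart" otherwise.
  if [ -d "$1" ]; then
    echo "restart"
  else
    echo "first-run"
  fi
}
first_run_check "${DOMAIN_HOME:-/u01/oracle/user_projects/domains/osb_domain}"
```

On "first-run" the script would create the domain (or extend it) before starting the server; on "restart" it would simply start the server again.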
For the admin and managed containers we identify which type of container they are and pass an appropriate script to the "docker run" command.

Containers are started using the "docker start" command, and the container uses the same command or script as when it was created. That means we must detect whether this is a new container or a container being started after previously being shut down. With the FMW containers we do this by looking for the existence of the domain directory: if it exists we have previously been started; if not, this must be our first run.

Tools such as Docker Compose and Docker Swarm simplify the task of deploying our multi-container FMW cluster, and we will look at these in the next blog entry. One of the benefits we will find with Swarm is that it includes a load balancer. The current multi-container approach would require either another container to run a load balancer or an external load balancer; we will see that Swarm removes this need.

Retrieving Docker Files for OSB Cluster

We are posting all the required files to go along with this blog on GitHub. You are welcome to fork from this and improve it. We cloned many of these files from the official Oracle Docker GitHub. We removed unused versions and added a simplified build.sh file to each product directory to make it easy to see how we actually built our environment. We are still updating the files online and will shortly add details on building the Swarm services.

Summary

In this entry we have explained how to create a Fusion Middleware domain and run it in Docker containers. In our next entry we will simplify the deployment of our cluster by taking advantage of swarm mode, defining Swarm services for the database, the WebLogic Admin Server and the WebLogic managed servers.

0}ul.lst-kix_opi66v2qdsjs-2{list-style-type:none}ul.lst-kix_opi66v2qdsjs-1{list-style-type:none}ul.lst-kix_opi66v2qdsjs-0{list-style-type:none}.lst-kix_q8ok0mh9yyto-0>li:before{content:"" counter(lst-ctn-kix_q8ok0mh9yyto-0,upper-roman) ". "}.lst-kix_fy6y7gyjejoh-3>li:before{content:"\0025cf "}.lst-kix_bfzyeb917dp8-5>li:before{content:"(" counter(lst-ctn-kix_bfzyeb917dp8-5,lower-latin) ") "}.lst-kix_jj5w63toozfm-0>li{counter-increment:lst-ctn-kix_jj5w63toozfm-0}.lst-kix_x1epm4iu41dp-6>li:before{content:"\0025cf "}ul.lst-kix_opi66v2qdsjs-8{list-style-type:none}ul.lst-kix_opi66v2qdsjs-7{list-style-type:none}.lst-kix_x1epm4iu41dp-4>li:before{content:"\0025cb "}ul.lst-kix_opi66v2qdsjs-6{list-style-type:none}ul.lst-kix_opi66v2qdsjs-5{list-style-type:none}.lst-kix_fy6y7gyjejoh-1>li:before{content:"\0025cb "}ul.lst-kix_opi66v2qdsjs-4{list-style-type:none}.lst-kix_bfzyeb917dp8-7>li:before{content:"(" counter(lst-ctn-kix_bfzyeb917dp8-7,lower-latin) ") "}.lst-kix_emhp84jkv42c-7>li{counter-increment:lst-ctn-kix_emhp84jkv42c-7}.lst-kix_fy6y7gyjejoh-7>li:before{content:"\0025cb "}ol.lst-kix_xv318blpjdo-1.start{counter-reset:lst-ctn-kix_xv318blpjdo-1 0}.lst-kix_x1epm4iu41dp-0>li:before{content:"\0025cf "}.lst-kix_fy6y7gyjejoh-5>li:before{content:"\0025a0 "}.lst-kix_emhp84jkv42c-3>li:before{content:"" counter(lst-ctn-kix_emhp84jkv42c-3,lower-latin) ") "}.lst-kix_emhp84jkv42c-5>li:before{content:"(" counter(lst-ctn-kix_emhp84jkv42c-5,lower-latin) ") "}.lst-kix_emhp84jkv42c-1>li:before{content:"" counter(lst-ctn-kix_emhp84jkv42c-1,upper-latin) ". "}.lst-kix_emhp84jkv42c-7>li:before{content:"(" counter(lst-ctn-kix_emhp84jkv42c-7,lower-latin) ") "}.lst-kix_jj5w63toozfm-5>li:before{content:"" counter(lst-ctn-kix_jj5w63toozfm-5,lower-roman) ". 
"}.lst-kix_bfzyeb917dp8-2>li{counter-increment:lst-ctn-kix_bfzyeb917dp8-2}ul.lst-kix_1qz6dmm9b14l-6{list-style-type:none}.lst-kix_gyhqddkw9i05-8>li:before{content:"\0025a0 "}.lst-kix_jj5w63toozfm-7>li:before{content:"" counter(lst-ctn-kix_jj5w63toozfm-7,lower-latin) ". "}ul.lst-kix_1qz6dmm9b14l-5{list-style-type:none}ul.lst-kix_1qz6dmm9b14l-8{list-style-type:none}ul.lst-kix_1qz6dmm9b14l-7{list-style-type:none}.lst-kix_jj5w63toozfm-1>li{counter-increment:lst-ctn-kix_jj5w63toozfm-1}ul.lst-kix_1qz6dmm9b14l-2{list-style-type:none}ul.lst-kix_1qz6dmm9b14l-1{list-style-type:none}ul.lst-kix_1qz6dmm9b14l-4{list-style-type:none}ul.lst-kix_1qz6dmm9b14l-3{list-style-type:none}.lst-kix_u0uqs69v9qbh-0>li:before{content:"\0025cf "}.lst-kix_u0uqs69v9qbh-4>li:before{content:"\0025cb "}ul.lst-kix_1qz6dmm9b14l-0{list-style-type:none}.lst-kix_s3mi7ukxwiwf-6>li{counter-increment:lst-ctn-kix_s3mi7ukxwiwf-6}.lst-kix_jj5w63toozfm-7>li{counter-increment:lst-ctn-kix_jj5w63toozfm-7}.lst-kix_u0uqs69v9qbh-2>li:before{content:"\0025a0 "}.lst-kix_8p26nc4xx5n8-6>li:before{content:"\0025cf "}ol.lst-kix_1wulu3ra2vwv-4.start{counter-reset:lst-ctn-kix_1wulu3ra2vwv-4 0}.lst-kix_u0uqs69v9qbh-8>li:before{content:"\0025a0 "}.lst-kix_bfzyeb917dp8-1>li:before{content:"" counter(lst-ctn-kix_bfzyeb917dp8-1,upper-latin) ". 
"}ul.lst-kix_ekmayt81kvbz-8{list-style-type:none}.lst-kix_gyhqddkw9i05-0>li:before{content:"\0025cf "}ul.lst-kix_ekmayt81kvbz-7{list-style-type:none}.lst-kix_gyhqddkw9i05-4>li:before{content:"\0025cb "}.lst-kix_8p26nc4xx5n8-2>li:before{content:"\0025a0 "}ul.lst-kix_ekmayt81kvbz-6{list-style-type:none}ul.lst-kix_ekmayt81kvbz-5{list-style-type:none}ul.lst-kix_ekmayt81kvbz-4{list-style-type:none}ul.lst-kix_ekmayt81kvbz-3{list-style-type:none}.lst-kix_gyhqddkw9i05-6>li:before{content:"\0025cf "}.lst-kix_8p26nc4xx5n8-0>li:before{content:"\0025cf "}.lst-kix_8p26nc4xx5n8-8>li:before{content:"\0025a0 "}ul.lst-kix_ekmayt81kvbz-2{list-style-type:none}ul.lst-kix_ekmayt81kvbz-1{list-style-type:none}.lst-kix_u0uqs69v9qbh-6>li:before{content:"\0025cf "}ul.lst-kix_ekmayt81kvbz-0{list-style-type:none}ol.lst-kix_s3mi7ukxwiwf-1{list-style-type:none}ol.lst-kix_bfzyeb917dp8-5.start{counter-reset:lst-ctn-kix_bfzyeb917dp8-5 0}ol.lst-kix_s3mi7ukxwiwf-2{list-style-type:none}ol.lst-kix_s3mi7ukxwiwf-0{list-style-type:none}ul.lst-kix_7tib3jrzu2u9-1{list-style-type:none}ol.lst-kix_s3mi7ukxwiwf-5{list-style-type:none}ul.lst-kix_7tib3jrzu2u9-2{list-style-type:none}ol.lst-kix_s3mi7ukxwiwf-6{list-style-type:none}ol.lst-kix_s3mi7ukxwiwf-3{list-style-type:none}ul.lst-kix_7tib3jrzu2u9-0{list-style-type:none}ol.lst-kix_s3mi7ukxwiwf-4{list-style-type:none}ul.lst-kix_7tib3jrzu2u9-5{list-style-type:none}ul.lst-kix_7tib3jrzu2u9-6{list-style-type:none}ul.lst-kix_7tib3jrzu2u9-3{list-style-type:none}ol.lst-kix_s3mi7ukxwiwf-7{list-style-type:none}ul.lst-kix_7tib3jrzu2u9-4{list-style-type:none}ol.lst-kix_s3mi7ukxwiwf-8{list-style-type:none}.lst-kix_gyhqddkw9i05-2>li:before{content:"\0025a0 "}.lst-kix_8p26nc4xx5n8-4>li:before{content:"\0025cb "}ul.lst-kix_7tib3jrzu2u9-7{list-style-type:none}ul.lst-kix_7tib3jrzu2u9-8{list-style-type:none}.lst-kix_xv318blpjdo-1>li:before{content:"" counter(lst-ctn-kix_xv318blpjdo-1,upper-latin) ". 
"}.lst-kix_xv318blpjdo-0>li:before{content:"" counter(lst-ctn-kix_xv318blpjdo-0,upper-roman) ". "}.lst-kix_h0kibz3smj6t-4>li:before{content:"\0025cb "}.lst-kix_1wulu3ra2vwv-1>li{counter-increment:lst-ctn-kix_1wulu3ra2vwv-1}.lst-kix_h0kibz3smj6t-7>li:before{content:"\0025cb "}.lst-kix_h0kibz3smj6t-8>li:before{content:"\0025a0 "}.lst-kix_11a9ub9xa97v-5>li{counter-increment:lst-ctn-kix_11a9ub9xa97v-5}.lst-kix_opi66v2qdsjs-0>li:before{content:"\0025cf "}.lst-kix_686a8e4qhxwx-7>li:before{content:"\0025cb "}.lst-kix_xv318blpjdo-8>li:before{content:"(" counter(lst-ctn-kix_xv318blpjdo-8,lower-roman) ") "}.lst-kix_xv318blpjdo-5>li:before{content:"(" counter(lst-ctn-kix_xv318blpjdo-5,lower-latin) ") "}.lst-kix_h0kibz3smj6t-3>li:before{content:"\0025cf "}.lst-kix_686a8e4qhxwx-3>li:before{content:"\0025cf "}.lst-kix_686a8e4qhxwx-4>li:before{content:"\0025cb "}.lst-kix_xv318blpjdo-4>li:before{content:"(" counter(lst-ctn-kix_xv318blpjdo-4,decimal) ") "}.lst-kix_h0kibz3smj6t-0>li:before{content:"\0025cf "}ol.lst-kix_11a9ub9xa97v-0.start{counter-reset:lst-ctn-kix_11a9ub9xa97v-0 0}.lst-kix_s3mi7ukxwiwf-7>li{counter-increment:lst-ctn-kix_s3mi7ukxwiwf-7}.lst-kix_q8ok0mh9yyto-3>li:before{content:"" counter(lst-ctn-kix_q8ok0mh9yyto-3,lower-latin) ") "}.lst-kix_686a8e4qhxwx-8>li:before{content:"\0025a0 "}.lst-kix_emhp84jkv42c-6>li{counter-increment:lst-ctn-kix_emhp84jkv42c-6}.lst-kix_q8ok0mh9yyto-2>li:before{content:"" counter(lst-ctn-kix_q8ok0mh9yyto-2,decimal) ". 
"}.lst-kix_q8ok0mh9yyto-6>li:before{content:"(" counter(lst-ctn-kix_q8ok0mh9yyto-6,lower-roman) ") "}.lst-kix_4m04az9jmmj8-2>li:before{content:"\0025a0 "}.lst-kix_4m04az9jmmj8-6>li:before{content:"\0025cf "}.lst-kix_q8ok0mh9yyto-7>li:before{content:"(" counter(lst-ctn-kix_q8ok0mh9yyto-7,lower-latin) ") "}.lst-kix_4m04az9jmmj8-1>li:before{content:"\0025cb "}.lst-kix_4m04az9jmmj8-5>li:before{content:"\0025a0 "}.lst-kix_xv318blpjdo-4>li{counter-increment:lst-ctn-kix_xv318blpjdo-4}.lst-kix_x1epm4iu41dp-7>li:before{content:"\0025cb "}.lst-kix_6qnkx7t1adn9-4>li:before{content:"\0025cb "}.lst-kix_6qnkx7t1adn9-8>li:before{content:"\0025a0 "}.lst-kix_xv318blpjdo-6>li{counter-increment:lst-ctn-kix_xv318blpjdo-6}ol.lst-kix_jj5w63toozfm-1.start{counter-reset:lst-ctn-kix_jj5w63toozfm-1 0}.lst-kix_bfzyeb917dp8-4>li:before{content:"(" counter(lst-ctn-kix_bfzyeb917dp8-4,decimal) ") "}.lst-kix_x1epm4iu41dp-3>li:before{content:"\0025cf "}.lst-kix_fd1rucpc9vz2-8>li:before{content:"\0025a0 "}.lst-kix_bfzyeb917dp8-8>li:before{content:"(" counter(lst-ctn-kix_bfzyeb917dp8-8,lower-roman) ") "}.lst-kix_fy6y7gyjejoh-2>li:before{content:"\0025a0 "}.lst-kix_fy6y7gyjejoh-6>li:before{content:"\0025cf "}ol.lst-kix_1wulu3ra2vwv-8.start{counter-reset:lst-ctn-kix_1wulu3ra2vwv-8 0}ol.lst-kix_s3mi7ukxwiwf-6.start{counter-reset:lst-ctn-kix_s3mi7ukxwiwf-6 0}ol.lst-kix_jj5w63toozfm-0.start{counter-reset:lst-ctn-kix_jj5w63toozfm-0 0}.lst-kix_emhp84jkv42c-2>li:before{content:"" counter(lst-ctn-kix_emhp84jkv42c-2,decimal) ". 
"}.lst-kix_emhp84jkv42c-6>li:before{content:"(" counter(lst-ctn-kix_emhp84jkv42c-6,lower-roman) ") "}.lst-kix_686a8e4qhxwx-0>li:before{content:"\0025cf "}.lst-kix_ndaonzmgp8vn-5>li:before{content:"\0025a0 "}.lst-kix_9jxnjym0nges-3>li:before{content:"\0025cf "}.lst-kix_opi66v2qdsjs-7>li:before{content:"\0025cb "}.lst-kix_ndaonzmgp8vn-1>li:before{content:"\0025cb "}.lst-kix_gyhqddkw9i05-7>li:before{content:"\0025cb "}.lst-kix_jj5w63toozfm-6>li:before{content:"" counter(lst-ctn-kix_jj5w63toozfm-6,decimal) ". "}.lst-kix_u0uqs69v9qbh-3>li:before{content:"\0025cf "}.lst-kix_fd1rucpc9vz2-0>li:before{content:"\0025cf "}.lst-kix_fd1rucpc9vz2-4>li:before{content:"\0025cb "}ol.lst-kix_s3mi7ukxwiwf-5.start{counter-reset:lst-ctn-kix_s3mi7ukxwiwf-5 0}.lst-kix_opi66v2qdsjs-3>li:before{content:"\0025cf "}.lst-kix_8p26nc4xx5n8-3>li:before{content:"\0025cf "}.lst-kix_8p26nc4xx5n8-7>li:before{content:"\0025cb "}.lst-kix_u0uqs69v9qbh-7>li:before{content:"\0025cb "}.lst-kix_bfzyeb917dp8-0>li:before{content:"" counter(lst-ctn-kix_bfzyeb917dp8-0,upper-roman) ". "}.lst-kix_11a9ub9xa97v-1>li:before{content:"" counter(lst-ctn-kix_11a9ub9xa97v-1,upper-latin) ". 
"}.lst-kix_gyhqddkw9i05-3>li:before{content:"\0025cf "}.lst-kix_9jxnjym0nges-7>li:before{content:"\0025cb "}.lst-kix_11a9ub9xa97v-5>li:before{content:"(" counter(lst-ctn-kix_11a9ub9xa97v-5,lower-latin) ") "}ol.lst-kix_11a9ub9xa97v-2.start{counter-reset:lst-ctn-kix_11a9ub9xa97v-2 0}.lst-kix_s3mi7ukxwiwf-4>li{counter-increment:lst-ctn-kix_s3mi7ukxwiwf-4}.lst-kix_7tib3jrzu2u9-4>li:before{content:"\0025cb "}ol.lst-kix_xv318blpjdo-4{list-style-type:none}ol.lst-kix_s3mi7ukxwiwf-7.start{counter-reset:lst-ctn-kix_s3mi7ukxwiwf-7 0}ol.lst-kix_xv318blpjdo-5{list-style-type:none}ol.lst-kix_xv318blpjdo-6{list-style-type:none}ol.lst-kix_xv318blpjdo-7{list-style-type:none}ol.lst-kix_xv318blpjdo-0{list-style-type:none}ol.lst-kix_xv318blpjdo-1{list-style-type:none}.lst-kix_7tib3jrzu2u9-1>li:before{content:"\0025cb "}ol.lst-kix_xv318blpjdo-2{list-style-type:none}ol.lst-kix_xv318blpjdo-3{list-style-type:none}.lst-kix_q8ok0mh9yyto-6>li{counter-increment:lst-ctn-kix_q8ok0mh9yyto-6}ul.lst-kix_5anu1k9tsyak-7{list-style-type:none}.lst-kix_jj5w63toozfm-6>li{counter-increment:lst-ctn-kix_jj5w63toozfm-6}ul.lst-kix_5anu1k9tsyak-8{list-style-type:none}ul.lst-kix_5anu1k9tsyak-5{list-style-type:none}.lst-kix_11a9ub9xa97v-2>li{counter-increment:lst-ctn-kix_11a9ub9xa97v-2}ol.lst-kix_1wulu3ra2vwv-0{list-style-type:none}ul.lst-kix_5anu1k9tsyak-6{list-style-type:none}ol.lst-kix_1wulu3ra2vwv-1{list-style-type:none}ul.lst-kix_5anu1k9tsyak-3{list-style-type:none}ul.lst-kix_5anu1k9tsyak-4{list-style-type:none}ul.lst-kix_5anu1k9tsyak-1{list-style-type:none}.lst-kix_xv318blpjdo-3>li{counter-increment:lst-ctn-kix_xv318blpjdo-3}ul.lst-kix_5anu1k9tsyak-2{list-style-type:none}ol.lst-kix_1wulu3ra2vwv-6{list-style-type:none}ul.lst-kix_5anu1k9tsyak-0{list-style-type:none}ol.lst-kix_1wulu3ra2vwv-7{list-style-type:none}ol.lst-kix_1wulu3ra2vwv-8{list-style-type:none}.lst-kix_7tib3jrzu2u9-6>li:before{content:"\0025cf 
"}ol.lst-kix_xv318blpjdo-8{list-style-type:none}ol.lst-kix_1wulu3ra2vwv-2{list-style-type:none}ol.lst-kix_1wulu3ra2vwv-3{list-style-type:none}.lst-kix_7tib3jrzu2u9-7>li:before{content:"\0025cb "}ol.lst-kix_1wulu3ra2vwv-4{list-style-type:none}ol.lst-kix_1wulu3ra2vwv-5{list-style-type:none}ol.lst-kix_1wulu3ra2vwv-5.start{counter-reset:lst-ctn-kix_1wulu3ra2vwv-5 0}ol.lst-kix_11a9ub9xa97v-7.start{counter-reset:lst-ctn-kix_11a9ub9xa97v-7 0}ol.lst-kix_q8ok0mh9yyto-2.start{counter-reset:lst-ctn-kix_q8ok0mh9yyto-2 0}.lst-kix_11a9ub9xa97v-6>li{counter-increment:lst-ctn-kix_11a9ub9xa97v-6}.lst-kix_5anu1k9tsyak-7>li:before{content:"\0025cb "}.lst-kix_j42a5dwgnqyq-6>li:before{content:"\0025cf "}ol.lst-kix_xv318blpjdo-2.start{counter-reset:lst-ctn-kix_xv318blpjdo-2 0}.lst-kix_1wulu3ra2vwv-4>li{counter-increment:lst-ctn-kix_1wulu3ra2vwv-4}.lst-kix_q8ok0mh9yyto-2>li{counter-increment:lst-ctn-kix_q8ok0mh9yyto-2}ol.lst-kix_jj5w63toozfm-4.start{counter-reset:lst-ctn-kix_jj5w63toozfm-4 0}.lst-kix_6qnkx7t1adn9-2>li:before{content:"\0025a0 "}.lst-kix_qeqyxe7gm97l-6>li:before{content:"\0025cf "}.lst-kix_qeqyxe7gm97l-4>li:before{content:"\0025cb "}.lst-kix_j42a5dwgnqyq-1>li:before{content:"\0025cb "}.lst-kix_qeqyxe7gm97l-1>li:before{content:"\0025cb "}.lst-kix_qeqyxe7gm97l-3>li:before{content:"\0025cf "}.lst-kix_j42a5dwgnqyq-4>li:before{content:"\0025cb "}ol.lst-kix_1wulu3ra2vwv-0.start{counter-reset:lst-ctn-kix_1wulu3ra2vwv-0 0}ol.lst-kix_q8ok0mh9yyto-7.start{counter-reset:lst-ctn-kix_q8ok0mh9yyto-7 0}.lst-kix_j42a5dwgnqyq-3>li:before{content:"\0025cf "}.lst-kix_jj5w63toozfm-2>li{counter-increment:lst-ctn-kix_jj5w63toozfm-2}ol.lst-kix_1wulu3ra2vwv-3.start{counter-reset:lst-ctn-kix_1wulu3ra2vwv-3 0}ol.lst-kix_q8ok0mh9yyto-0.start{counter-reset:lst-ctn-kix_q8ok0mh9yyto-0 0}.lst-kix_6qnkx7t1adn9-5>li:before{content:"\0025a0 "}.lst-kix_6qnkx7t1adn9-7>li:before{content:"\0025cb "}.lst-kix_iv2x96orjh4l-7>li:before{content:"\0025cb "}.lst-kix_ndaonzmgp8vn-6>li:before{content:"\0025cf 
"}.lst-kix_5anu1k9tsyak-1>li:before{content:"\0025cb "}.lst-kix_ndaonzmgp8vn-8>li:before{content:"\0025a0 "}.lst-kix_iv2x96orjh4l-1>li:before{content:"\0025cb "}.lst-kix_bfzyeb917dp8-8>li{counter-increment:lst-ctn-kix_bfzyeb917dp8-8}.lst-kix_opi66v2qdsjs-4>li:before{content:"\0025cb "}.lst-kix_fd1rucpc9vz2-1>li:before{content:"\0025cb "}.lst-kix_fd1rucpc9vz2-3>li:before{content:"\0025cf "}.lst-kix_opi66v2qdsjs-6>li:before{content:"\0025cf "}ol.lst-kix_11a9ub9xa97v-4.start{counter-reset:lst-ctn-kix_11a9ub9xa97v-4 0}.lst-kix_bfzyeb917dp8-1>li{counter-increment:lst-ctn-kix_bfzyeb917dp8-1}.lst-kix_ndaonzmgp8vn-0>li:before{content:"\0025cf "}ol.lst-kix_jj5w63toozfm-6.start{counter-reset:lst-ctn-kix_jj5w63toozfm-6 0}ul.lst-kix_b7256qmdgo85-6{list-style-type:none}ul.lst-kix_b7256qmdgo85-7{list-style-type:none}.lst-kix_1wulu3ra2vwv-3>li{counter-increment:lst-ctn-kix_1wulu3ra2vwv-3}ul.lst-kix_b7256qmdgo85-8{list-style-type:none}ul.lst-kix_b7256qmdgo85-2{list-style-type:none}ul.lst-kix_b7256qmdgo85-3{list-style-type:none}.lst-kix_11a9ub9xa97v-4>li:before{content:"(" counter(lst-ctn-kix_11a9ub9xa97v-4,decimal) ") "}ul.lst-kix_b7256qmdgo85-4{list-style-type:none}ul.lst-kix_b7256qmdgo85-5{list-style-type:none}.lst-kix_11a9ub9xa97v-2>li:before{content:"" counter(lst-ctn-kix_11a9ub9xa97v-2,decimal) ". 
"}.lst-kix_9jxnjym0nges-4>li:before{content:"\0025cb "}.lst-kix_86hsx13ssqid-4>li:before{content:"\0025cb "}ul.lst-kix_b7256qmdgo85-0{list-style-type:none}ul.lst-kix_b7256qmdgo85-1{list-style-type:none}.lst-kix_9jxnjym0nges-6>li:before{content:"\0025cf "}.lst-kix_86hsx13ssqid-2>li:before{content:"\0025a0 "}.lst-kix_bfzyeb917dp8-5>li{counter-increment:lst-ctn-kix_bfzyeb917dp8-5}.lst-kix_h0kibz3smj6t-6>li:before{content:"\0025cf "}.lst-kix_11a9ub9xa97v-3>li{counter-increment:lst-ctn-kix_11a9ub9xa97v-3}ol.lst-kix_11a9ub9xa97v-5.start{counter-reset:lst-ctn-kix_11a9ub9xa97v-5 0}ul.lst-kix_x1epm4iu41dp-0{list-style-type:none}ol.lst-kix_xv318blpjdo-0.start{counter-reset:lst-ctn-kix_xv318blpjdo-0 0}.lst-kix_jkgkf1u9sy0c-0>li:before{content:"\0025cf "}ul.lst-kix_x1epm4iu41dp-6{list-style-type:none}ul.lst-kix_x1epm4iu41dp-5{list-style-type:none}ul.lst-kix_x1epm4iu41dp-8{list-style-type:none}.lst-kix_jkgkf1u9sy0c-3>li:before{content:"\0025cf "}ul.lst-kix_x1epm4iu41dp-7{list-style-type:none}ul.lst-kix_x1epm4iu41dp-2{list-style-type:none}.lst-kix_xv318blpjdo-0>li{counter-increment:lst-ctn-kix_xv318blpjdo-0}ol.lst-kix_jj5w63toozfm-8.start{counter-reset:lst-ctn-kix_jj5w63toozfm-8 0}ul.lst-kix_x1epm4iu41dp-1{list-style-type:none}ul.lst-kix_x1epm4iu41dp-4{list-style-type:none}ul.lst-kix_x1epm4iu41dp-3{list-style-type:none}ol.lst-kix_1wulu3ra2vwv-2.start{counter-reset:lst-ctn-kix_1wulu3ra2vwv-2 0}.lst-kix_686a8e4qhxwx-6>li:before{content:"\0025cf "}.lst-kix_s3mi7ukxwiwf-5>li{counter-increment:lst-ctn-kix_s3mi7ukxwiwf-5}.lst-kix_h0kibz3smj6t-1>li:before{content:"\0025cb "}.lst-kix_xv318blpjdo-6>li:before{content:"(" counter(lst-ctn-kix_xv318blpjdo-6,lower-roman) ") "}ul.lst-kix_86hsx13ssqid-3{list-style-type:none}ul.lst-kix_86hsx13ssqid-2{list-style-type:none}.lst-kix_xv318blpjdo-3>li:before{content:"" counter(lst-ctn-kix_xv318blpjdo-3,lower-latin) ") 
"}ul.lst-kix_86hsx13ssqid-5{list-style-type:none}ul.lst-kix_86hsx13ssqid-4{list-style-type:none}.lst-kix_686a8e4qhxwx-1>li:before{content:"\0025cb "}ul.lst-kix_86hsx13ssqid-1{list-style-type:none}.lst-kix_jkgkf1u9sy0c-8>li:before{content:"\0025a0 "}ul.lst-kix_86hsx13ssqid-0{list-style-type:none}ul.lst-kix_fy6y7gyjejoh-0{list-style-type:none}ul.lst-kix_fy6y7gyjejoh-1{list-style-type:none}ul.lst-kix_fy6y7gyjejoh-2{list-style-type:none}ul.lst-kix_fy6y7gyjejoh-3{list-style-type:none}.lst-kix_b7256qmdgo85-1>li:before{content:"\0025cb "}ul.lst-kix_86hsx13ssqid-7{list-style-type:none}ul.lst-kix_h9mjmxara98n-0{list-style-type:none}ul.lst-kix_86hsx13ssqid-6{list-style-type:none}ul.lst-kix_86hsx13ssqid-8{list-style-type:none}ul.lst-kix_h9mjmxara98n-3{list-style-type:none}.lst-kix_4m04az9jmmj8-8>li:before{content:"\0025a0 "}ul.lst-kix_fy6y7gyjejoh-8{list-style-type:none}ul.lst-kix_h9mjmxara98n-4{list-style-type:none}ul.lst-kix_h9mjmxara98n-1{list-style-type:none}ul.lst-kix_h9mjmxara98n-2{list-style-type:none}ul.lst-kix_h9mjmxara98n-7{list-style-type:none}ul.lst-kix_fy6y7gyjejoh-4{list-style-type:none}ul.lst-kix_h9mjmxara98n-8{list-style-type:none}ul.lst-kix_fy6y7gyjejoh-5{list-style-type:none}ul.lst-kix_h9mjmxara98n-5{list-style-type:none}ul.lst-kix_fy6y7gyjejoh-6{list-style-type:none}ul.lst-kix_h9mjmxara98n-6{list-style-type:none}ul.lst-kix_fy6y7gyjejoh-7{list-style-type:none}.lst-kix_b7256qmdgo85-6>li:before{content:"\0025cf "}ol.lst-kix_1wulu3ra2vwv-1.start{counter-reset:lst-ctn-kix_1wulu3ra2vwv-1 0}.lst-kix_q97rvvc7c69e-0>li:before{content:"\0025cf "}.lst-kix_q8ok0mh9yyto-1>li:before{content:"" counter(lst-ctn-kix_q8ok0mh9yyto-1,upper-latin) ". 
"}.lst-kix_4m04az9jmmj8-0>li:before{content:"\0025cf "}ol.lst-kix_11a9ub9xa97v-6.start{counter-reset:lst-ctn-kix_11a9ub9xa97v-6 0}.lst-kix_q8ok0mh9yyto-4>li:before{content:"(" counter(lst-ctn-kix_q8ok0mh9yyto-4,decimal) ") "}.lst-kix_emhp84jkv42c-0>li{counter-increment:lst-ctn-kix_emhp84jkv42c-0}.lst-kix_q8ok0mh9yyto-5>li{counter-increment:lst-ctn-kix_q8ok0mh9yyto-5}.lst-kix_s3mi7ukxwiwf-1>li{counter-increment:lst-ctn-kix_s3mi7ukxwiwf-1}.lst-kix_4m04az9jmmj8-3>li:before{content:"\0025cf "}.lst-kix_xv318blpjdo-7>li{counter-increment:lst-ctn-kix_xv318blpjdo-7}ol.lst-kix_xv318blpjdo-7.start{counter-reset:lst-ctn-kix_xv318blpjdo-7 0}.lst-kix_gfromclascha-1>li:before{content:"\0025cb "}.lst-kix_86hsx13ssqid-7>li:before{content:"\0025cb "}ol.lst-kix_q8ok0mh9yyto-3.start{counter-reset:lst-ctn-kix_q8ok0mh9yyto-3 0}.lst-kix_1wulu3ra2vwv-7>li{counter-increment:lst-ctn-kix_1wulu3ra2vwv-7}ul.lst-kix_fd1rucpc9vz2-0{list-style-type:none}.lst-kix_fy6y7gyjejoh-0>li:before{content:"\0025cf "}ul.lst-kix_fd1rucpc9vz2-1{list-style-type:none}ul.lst-kix_fd1rucpc9vz2-2{list-style-type:none}ul.lst-kix_fd1rucpc9vz2-3{list-style-type:none}ol.lst-kix_q8ok0mh9yyto-6.start{counter-reset:lst-ctn-kix_q8ok0mh9yyto-6 0}ul.lst-kix_fd1rucpc9vz2-8{list-style-type:none}.lst-kix_bfzyeb917dp8-6>li:before{content:"(" counter(lst-ctn-kix_bfzyeb917dp8-6,lower-roman) ") "}.lst-kix_6ril5iwt0fcl-3>li:before{content:"\0025cf "}ul.lst-kix_fd1rucpc9vz2-4{list-style-type:none}ul.lst-kix_fd1rucpc9vz2-5{list-style-type:none}ul.lst-kix_fd1rucpc9vz2-6{list-style-type:none}.lst-kix_fd1rucpc9vz2-6>li:before{content:"\0025cf "}ul.lst-kix_fd1rucpc9vz2-7{list-style-type:none}.lst-kix_5anu1k9tsyak-4>li:before{content:"\0025cb "}.lst-kix_s3mi7ukxwiwf-8>li{counter-increment:lst-ctn-kix_s3mi7ukxwiwf-8}.lst-kix_iv2x96orjh4l-4>li:before{content:"\0025cb "}.lst-kix_s3mi7ukxwiwf-6>li:before{content:"" counter(lst-ctn-kix_s3mi7ukxwiwf-6,decimal) ". 
"}.lst-kix_fy6y7gyjejoh-8>li:before{content:"\0025a0 "}ol.lst-kix_xv318blpjdo-4.start{counter-reset:lst-ctn-kix_xv318blpjdo-4 0}.lst-kix_x1epm4iu41dp-1>li:before{content:"\0025cb "}.lst-kix_q97rvvc7c69e-8>li:before{content:"\0025a0 "}ol.lst-kix_xv318blpjdo-5.start{counter-reset:lst-ctn-kix_xv318blpjdo-5 0}ul.lst-kix_8p26nc4xx5n8-0{list-style-type:none}ul.lst-kix_8p26nc4xx5n8-1{list-style-type:none}ul.lst-kix_8p26nc4xx5n8-2{list-style-type:none}ul.lst-kix_8p26nc4xx5n8-3{list-style-type:none}ul.lst-kix_8p26nc4xx5n8-4{list-style-type:none}.lst-kix_emhp84jkv42c-0>li:before{content:"" counter(lst-ctn-kix_emhp84jkv42c-0,upper-roman) ". "}.lst-kix_emhp84jkv42c-8>li:before{content:"(" counter(lst-ctn-kix_emhp84jkv42c-8,lower-roman) ") "}.lst-kix_9jxnjym0nges-1>li:before{content:"\0025cb "}ol.lst-kix_q8ok0mh9yyto-5.start{counter-reset:lst-ctn-kix_q8ok0mh9yyto-5 0}.lst-kix_ndaonzmgp8vn-3>li:before{content:"\0025cf "}.lst-kix_jj5w63toozfm-8>li:before{content:"" counter(lst-ctn-kix_jj5w63toozfm-8,lower-roman) ". 
"}.lst-kix_u0uqs69v9qbh-1>li:before{content:"\0025cb "}.lst-kix_opi66v2qdsjs-1>li:before{content:"\0025cb "}.lst-kix_11a9ub9xa97v-7>li:before{content:"(" counter(lst-ctn-kix_11a9ub9xa97v-7,lower-latin) ") "}.lst-kix_8p26nc4xx5n8-5>li:before{content:"\0025a0 "}ol.lst-kix_xv318blpjdo-6.start{counter-reset:lst-ctn-kix_xv318blpjdo-6 0}.lst-kix_1wulu3ra2vwv-0>li{counter-increment:lst-ctn-kix_1wulu3ra2vwv-0}.lst-kix_gyhqddkw9i05-5>li:before{content:"\0025a0 "}ul.lst-kix_6qnkx7t1adn9-8{list-style-type:none}ol.lst-kix_q8ok0mh9yyto-4.start{counter-reset:lst-ctn-kix_q8ok0mh9yyto-4 0}ul.lst-kix_6qnkx7t1adn9-7{list-style-type:none}ul.lst-kix_8p26nc4xx5n8-5{list-style-type:none}ul.lst-kix_6qnkx7t1adn9-6{list-style-type:none}ul.lst-kix_8p26nc4xx5n8-6{list-style-type:none}ul.lst-kix_6qnkx7t1adn9-5{list-style-type:none}.lst-kix_sholqrhc62dh-3>li:before{content:"\0025cf "}ul.lst-kix_8p26nc4xx5n8-7{list-style-type:none}ul.lst-kix_6qnkx7t1adn9-4{list-style-type:none}ul.lst-kix_8p26nc4xx5n8-8{list-style-type:none}ul.lst-kix_6qnkx7t1adn9-3{list-style-type:none}ul.lst-kix_6qnkx7t1adn9-2{list-style-type:none}ul.lst-kix_6qnkx7t1adn9-1{list-style-type:none}.lst-kix_1wulu3ra2vwv-1>li:before{content:"" counter(lst-ctn-kix_1wulu3ra2vwv-1,upper-latin) ". 
"}ul.lst-kix_6qnkx7t1adn9-0{list-style-type:none}ol{margin:0;padding:0}table td,table th{padding:0}.c30{padding-top:0pt;padding-bottom:0pt;line-height:1.15;orphans:2;widows:2;text-align:left}.c17{color:#666666;font-weight:400;text-decoration:underline;vertical-align:baseline;font-family:"Trebuchet MS";font-style:normal}.c2{padding-top:6pt;padding-bottom:0pt;line-height:1.2;orphans:2;widows:2;text-align:left}.c9{padding-top:24pt;padding-bottom:0pt;line-height:1.3;page-break-after:avoid;text-align:left}.c15{padding-top:8pt;padding-bottom:0pt;line-height:1.2;page-break-after:avoid;text-align:left}.c0{padding-top:0pt;padding-bottom:0pt;line-height:1.2;text-align:left}.c13{color:#008575;font-weight:400;font-size:16pt;font-family:"PT Sans Narrow"}.c3{font-size:9pt;font-family:"Consolas";color:#666600;font-weight:400}.c10{padding-top:0pt;padding-bottom:0pt;line-height:1.0;text-align:left}.c1{font-size:9pt;font-family:"Consolas";color:#000000;font-weight:400}.c8{font-size:9pt;font-family:"Consolas";color:#006666;font-weight:400}.c31{color:#695d46;font-weight:700;font-size:42pt;font-family:"PT Sans Narrow"}.c20{padding-top:16pt;padding-bottom:0pt;line-height:1.0;text-align:left}.c24{font-family:"Open Sans";color:#1155cc;font-weight:400;text-decoration:underline}.c22{color:#ff5e0e;font-weight:700;font-size:18pt;font-family:"PT Sans Narrow"}.c21{color:#000000;font-weight:700;font-size:9pt;font-family:"Consolas"}.c23{font-size:9pt;font-family:"Consolas";color:#660066;font-weight:400}.c25{font-size:9pt;font-family:"Consolas";color:#880000;font-weight:400}.c19{font-size:9pt;font-family:"Consolas";color:#000088;font-weight:400}.c6{background-color:#ffffff;max-width:468pt;padding:72pt 72pt 72pt 72pt}.c26{color:#000000;font-weight:400;font-family:"Arial"}.c11{font-family:"Open Sans";color:#695d46;font-weight:400}.c5{text-decoration:none;vertical-align:baseline;font-style:normal}.c27{color:#000000;font-weight:400;font-family:"Open 
Sans"}.c18{color:inherit;text-decoration:inherit}.c32{orphans:2;widows:2}.c28{font-weight:400;font-family:"Open Sans"}.c14{padding:0;margin:0}.c7{margin-left:36pt}.c4{margin-left:72pt}.c12{font-size:11pt}.c33{font-size:12pt}.c29{font-size:8pt}.c16{padding-left:0pt}.title{padding-top:0pt;color:#000000;font-size:26pt;padding-bottom:3pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}.subtitle{padding-top:0pt;color:#666666;font-size:15pt;padding-bottom:16pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}li{color:#000000;font-size:11pt;font-family:"Arial"}p{margin:0;color:#000000;font-size:11pt;font-family:"Arial"}h1{padding-top:20pt;color:#000000;font-size:20pt;padding-bottom:6pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h2{padding-top:18pt;color:#000000;font-size:16pt;padding-bottom:6pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h3{padding-top:16pt;color:#434343;font-size:14pt;padding-bottom:4pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h4{padding-top:14pt;color:#666666;font-size:12pt;padding-bottom:4pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h5{padding-top:12pt;color:#666666;font-size:11pt;padding-bottom:4pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h6{padding-top:12pt;color:#666666;font-size:11pt;padding-bottom:4pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;font-style:italic;orphans:2;widows:2;text-align:left} Oracle Fusion Middleware Deployments Using Docker Swarm Part IIOverviewThis is the second in a series of blogs that describe how to build a Fusion Middleware (FMW) Cluster that runs as a number of Docker images that run in docker containers.  
These containers are coordinated using Docker Swarm and can be deployed to a single host machine or to multiple hosts. This simplifies the task of building FMW clusters and also makes it easier to scale them in and out (adding or removing host machines) as well as up and down (using bigger or smaller host machines). This blog uses Oracle Service Bus as its FMW product, but the principles are applicable to other FMW products.

In our previous blog we talked about the overall requirements for running FMW on Docker Swarm and sketched out our overall approach. In this entry we explain the concepts behind Docker base images and create an Oracle Database image and an Oracle Service Bus image. The next blog will cover how to create an OSB domain and how to run it in Docker Swarm.

Notes on Building on Existing Docker Images

Docker allows you to create a new image by basing it on an existing image. The FROM instruction in a dockerfile tells Docker which base image you are using as your foundation. We will build a number of base images that can then be reused later.

A Docker image is a set of files and configuration. A Docker image runs in a Docker container, so a single image may be running in zero, one or more containers. Think of the image as the software and configuration; think of the container as the active, running image.

The diagram shows the images that we will build and the base images referenced in their FROM instructions.

[Diagram: Docker FMW Image Dependencies]

The solid lines indicate the image referenced by the FROM instruction in the dockerfile used to create an image. The dotted lines represent dependencies on other images. More details on the images are given in the next section, but the following bullets explain how they work. Note that we can store the created images in a company Docker registry so that they can be reused.

- Oracle Linux is the image that we use as our starting point. It is available from GitHub along with instructions (dockerfiles) to create the other images.
- The Oracle Database image is created by installing the Oracle Database binaries into the Oracle Linux image. This image can be used to create databases with different characteristics, such as different character sets, service names and instance names.
- An Oracle Database instance is created by running the Database Configuration Assistant in a container created from the Oracle Database image.
- The JDK 8 image is created by installing a JDK 8 into the generic Linux image; it could be reused by any software requiring Java.
- The Fusion Middleware Infrastructure image is created by installing the WebLogic Fusion Middleware Infrastructure into the JDK image. This provides WebLogic and some additional Oracle infrastructure files that are used by Fusion Middleware. This image can be reused by any Fusion Middleware product image.
- The Oracle Service Bus image provides a Service Bus installation on top of the Fusion Middleware Infrastructure image. This is a software-only installation; no domain is created, so the image can be used to create any desired Service Bus domain configuration.
- The Service Bus Domain image is where we run the domain configuration utility to create a Service Bus domain. Prior to running the domain configuration wizard, the FMW Repository Creation Utility (RCU) is run against the database container created from the Oracle Database instance image. Depending on the options chosen, this repository could be pointed to and used by a number of different FMW components; in our case we install the SOA schemas. We need to refer to the Oracle Database container to be able to run the RCU and also to complete the domain creation. Different run commands are used to start a container running either the Admin Server or a Managed Server.

Retrieving Docker Files for OSB Cluster

We are posting all the required files to go along with this blog on GitHub.
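The FROM-based layering described above can be sketched as a minimal dockerfile for one layer. The base image tag, installer jar and response file names below are illustrative assumptions for this sketch, not the exact names used in the Oracle dockerfiles:

```dockerfile
# Sketch only: the Service Bus layer reuses the FMW Infrastructure image
# as its base via the FROM instruction (image tag assumed for illustration).
FROM oracle/fmw-infrastructure:12.2.1.2

# Copy the OSB installer and a silent-install response file (names assumed)
COPY fmw_12.2.1.2.0_osb.jar install.rsp /u01/

# Install the software only -- no domain is created at this layer --
# then remove the installer to keep the image small
RUN java -jar /u01/fmw_12.2.1.2.0_osb.jar -silent -responseFile /u01/install.rsp \
 && rm /u01/fmw_12.2.1.2.0_osb.jar
```

Keeping domain creation out of this layer is what lets the same image back any Service Bus domain configuration later.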
You are welcome to fork from this and improve it. We cloned many of these files from the official Oracle Docker github. We removed unused versions and added a simplified build.sh file to each product directory to make it easy to see how we actually built our environment. We are still updating the files online and will shortly add details on building the Service Bus cluster. We made the following additional changes to the standard Oracle files:

- Java - we used the JDK rather than the JRE.
- WebLogic - we added a 12.2.1.2 FMW Infrastructure option alongside the generic and developer options. This option is available in other versions of the Oracle files but not in the 12.2.1.2 files.
- OSB and SOA - we added OSB and SOA directories along with their associated files.

Database Creation

Because Oracle Fusion Middleware relies on a database, we need to create a database image. Once we have created the database we need to create the FMW schemas within it by running the Repository Creation Utility (RCU). The RCU is only available in the installed FMW code, so to create the schemas we also need to install our FMW component, in our case Service Bus. The steps we follow to build, create and configure the database are:

1. Build the Oracle Database image using the Oracle-provided database dockerfile.
2. Create an Oracle Database instance by creating a Docker container and using the provided scripts to initialize the container with a database instance.

Detailed instructions to build the database image are provided in the section "Database Docker Image Build Steps". We create the database instance in a Docker container rather than baking it into an image because we only want to run a single instance of the database container.

FMW Installation

We will use multiple layered images to get to our FMW install image. Each image is kept separate to make it flexible and reusable.
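The chain of layered images just described boils down to each dockerfile naming the previous layer in its FROM line. A skeletal illustration follows; the tags and paths here are assumptions for illustration only, and the real dockerfiles are in the github repository:

```dockerfile
# Sketch only: a JDK 8 image built directly on the Oracle Linux base image.
# (Tags and paths are illustrative; see the repository for the real files.)
FROM oraclelinux:latest
ADD jdk-8uXXX-linux-x64.tar.gz /usr/java/

# The FMW Infrastructure dockerfile then starts from the JDK image:
#   FROM oracle/serverjdk:8
# and the OSB dockerfile in turn starts from the infrastructure image:
#   FROM oracle/weblogic:12.2.1.2-infrastructure
```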
For example, given the WebLogic FMW Infrastructure image we could create separate FMW images for SOA Suite, WebCenter and Business Intelligence.

- We install JDK 8 to create a JDK 8 image. This can be used to create any image requiring a JDK 8 installation. It is built on top of the Oracle Linux 7 image.
- We build on top of the JDK 8 image, installing Fusion Middleware Infrastructure to create a WebLogic FMW Infrastructure image. This can be used as the basis of any FMW image in the future.
- In our example we install SOA or OSB on top of the WebLogic FMW Infrastructure image as our FMW component. This does not create a domain; it just installs the software.

Detailed steps are given in the sections "Weblogic Docker Image Build Steps" and "OSB Docker Image Build Steps".

General Build Instructions

You can download our dockerfiles by cloning our github repository onto your build machine:

git clone https://github.com/Shuxuan/OSB-Docker-Swarm.git

This creates a directory OSB-Docker-Swarm. We will refer to its location as <MASTER> in subsequent instructions.

Database Docker Image Build Steps

There is an excellent blog on creating an Oracle Database Docker image by Gerald Venzl that covers everything. We provide a minimal description of the steps here only to be self-contained.

Get Oracle Database Binaries

Retrieve the Oracle 12.1.0.2 binaries from OTN (http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html):

- linuxamd64_12102_database_1of2.zip
- linuxamd64_12102_database_2of2.zip

and place them at <MASTER>/OracleDatabase/dockerfiles/12.1.0.2.

Build Oracle Database Docker Image

Create a Docker image for the database using the binaries from the previous step. This installs the database binaries and saves the result as a Docker image. The build.sh has all the parameters needed by the dockerfile; if you are interested in what they do, check out the official Oracle Docker database image docs.
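Before running build.sh it can save a failed build to confirm that the two zip files are actually staged where the dockerfile expects them. A small sketch, not part of the repository's scripts; the MASTER default below is a placeholder you should point at your own clone:

```shell
# Report whether each database binary zip is staged for the docker build.
# MASTER is a placeholder default; set it to your OSB-Docker-Swarm clone.
MASTER="${MASTER:-$HOME/OSB-Docker-Swarm}"
stage="$MASTER/OracleDatabase/dockerfiles/12.1.0.2"
for f in linuxamd64_12102_database_1of2.zip linuxamd64_12102_database_2of2.zip; do
  if [ -f "$stage/$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
```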
Note that if we don't have the Oracle Linux image, the build process will retrieve it from Docker Hub.

<MASTER>/OracleDatabase/build.sh

Check that the image has been built properly by making sure it is listed by the docker images command:

$ docker images
REPOSITORY      TAG         IMAGE ID      CREATED         SIZE
oracle/database 12.1.0.2-ee 8f0f3c5f4170  49 seconds ago  11.24 GB
oraclelinux     latest      27a5201e554e  3 weeks ago     225.1 MB

Create Oracle Database

Run a Docker container based on the previously built image to create the database instance. The create_db.sh command that we provide will do this. Again, the details of how it works are in the official Oracle Docker database image docs. We also added a -d parameter to run the container in the background.

./create_db.sh

Check the container logs:

docker logs <container_id>

Change the DB password by running the set_db_password.sh we provide, which sets the database password to FMWDocker. You can edit the file to change the password. We also modified this file to set the password policy so that passwords do not expire.

./set_db_password.sh

We now have an Oracle database instance running in a Docker container.

Weblogic Docker Image Build Steps

Build JDK 1.8 Docker image based on the base image

Go to the <MASTER>/oraclejdk8 folder and download the JDK 8 install binary, jdk-8uXXX-linux-x64.tar.gz, from OTN into the oraclejdk8 folder.
Any version that starts with jdk-8 should work. Then run build.sh:

./build.sh

Check that the image has been created:

$ docker images
REPOSITORY       TAG          IMAGE ID      CREATED         SIZE
oracle/database  12.1.0.2-ee  086a4a3541bc  34 minutes ago  11.24 GB
oracle/serverjdk 8            85aebb7b773e  22 hours ago    590.4 MB
oraclelinux      latest       27a5201e554e  3 weeks ago     225.1 MB

This JDK image is then used as a base for future images.

Build Weblogic 12.2.1.2 installation image based on the JDK 1.8 image

Go to the folder <MASTER>/wls12.2.1.2 and download fmw_12.2.1.2.0_infrastructure_Disk1_1of1.zip from OTN to the wls12.2.1.2 folder. Then run build.sh:

./build.sh

Run docker images to check the image created in the above steps:

$ docker images
REPOSITORY        TAG                      IMAGE ID      CREATED             SIZE
oracle/weblogic   12.2.1.2-infrastructure  9eceea30c474  About a minute ago  4.096 GB
oracle/database   12.1.0.2-ee              086a4a3541bc  About an hour ago   11.24 GB
oracle/serverjdk  8                        85aebb7b773e  22 hours ago        590.4 MB
oraclelinux       latest                   27a5201e554e  3 weeks ago         225.1 MB

This gives us our FMW Infrastructure Docker image.

OSB Docker Image Build Steps

Go to the folder <MASTER>/osb and download fmw_12.2.1.2.0_osb.jar from OTN to the osb folder.
If it is unavailable on OTN, it is available from e-delivery if you have a Service Bus license. Then run build.sh:

./build.sh

Check the Docker image built in the above step:

$ docker images
REPOSITORY        TAG                      IMAGE ID      CREATED            SIZE
oracle/osb        12.2.1.2                 59021dc16e40  5 minutes ago      7.231 GB
oracle/weblogic   12.2.1.2-infrastructure  9eceea30c474  12 minutes ago     4.096 GB
oracle/database   12.1.0.2-ee              086a4a3541bc  About an hour ago  11.24 GB
oracle/serverjdk  8                        85aebb7b773e  23 hours ago       590.4 MB
oraclelinux       latest                   27a5201e554e  3 weeks ago        225.1 MB

This gives us our FMW product Docker image, in our case Service Bus. We also provide SOA Suite dockerfiles in the git repository.

Summary

In this entry we have explained how to install Fusion Middleware and the Oracle Database as Docker images, and how to create a database container. In our next entry we will use these images to create a cluster in swarm mode by defining swarm services for the database, the WebLogic Admin Server and the WebLogic Managed Servers.
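Each build above ends with the same manual check that the new image appears in docker images output; that check is easy to script. A sketch (our own, not part of the repository): the sample output is a captured copy from this post so the snippet is self-contained, but against a live Docker daemon you would pipe docker images straight into the function:

```shell
# Verify that an expected REPOSITORY and TAG pair appears in `docker images`
# output. Live usage would be: docker images | image_present oracle/osb 12.2.1.2
image_present() {
  awk -v repo="$1" -v tag="$2" '$1 == repo && $2 == tag { found = 1 } END { exit !found }'
}

# Captured sample output (stand-in for the live `docker images` command).
sample='REPOSITORY        TAG                      IMAGE ID      CREATED         SIZE
oracle/osb        12.2.1.2                 59021dc16e40  5 minutes ago   7.231 GB
oracle/weblogic   12.2.1.2-infrastructure  9eceea30c474  12 minutes ago  4.096 GB'

if echo "$sample" | image_present oracle/osb 12.2.1.2; then
  echo "oracle/osb:12.2.1.2 is present"
else
  echo "oracle/osb:12.2.1.2 is missing"
fi
```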

Narrow";font-style:normal}.c34{color:#000000;font-weight:400;text-decoration:none;vertical-align:baseline;font-size:11pt;font-family:"Arial";font-style:normal}.c17{color:#666666;font-weight:400;text-decoration:underline;vertical-align:baseline;font-size:11pt;font-family:"Trebuchet MS";font-style:normal}.c2{color:#695d46;font-weight:400;text-decoration:none;vertical-align:baseline;font-size:11pt;font-family:"Open Sans";font-style:normal}.c21{color:#ff5e0e;font-weight:700;text-decoration:none;vertical-align:baseline;font-size:18pt;font-family:"PT Sans Narrow";font-style:normal}.c16{color:#695d46;font-weight:700;text-decoration:none;vertical-align:baseline;font-size:11pt;font-family:"Open Sans";font-style:normal}.c32{color:#695d46;font-weight:700;text-decoration:none;vertical-align:baseline;font-size:42pt;font-family:"PT Sans Narrow";font-style:normal}.c4{padding-top:6pt;padding-bottom:0pt;line-height:1.2;orphans:2;widows:2;text-align:left}.c30{padding-top:8pt;padding-bottom:0pt;line-height:1.2;page-break-after:avoid;text-align:left}.c19{padding-top:24pt;padding-bottom:0pt;line-height:1.3;page-break-after:avoid;text-align:left}.c27{text-decoration:none;vertical-align:baseline;font-size:12pt;font-style:normal}.c20{padding-top:16pt;padding-bottom:0pt;line-height:1.0;text-align:left}.c22{margin-left:36pt;border-spacing:0;border-collapse:collapse;margin-right:auto}.c1{padding-top:0pt;padding-bottom:0pt;line-height:1.0;text-align:left}.c25{font-size:18pt;font-family:"PT Sans Narrow";color:#ff5e0e;font-weight:700}.c35{border-spacing:0;border-collapse:collapse;margin-right:auto}.c11{font-family:"Open Sans";color:#1155cc;font-weight:400;text-decoration:underline}.c0{margin-left:72pt;border-spacing:0;border-collapse:collapse;margin-right:auto}.c23{text-decoration:none;vertical-align:baseline;font-size:16pt;font-style:normal}.c28{padding-top:0pt;padding-bottom:0pt;line-height:1.2;text-align:left}.c14{font-family:"PT Sans 
Narrow";color:#008575;font-weight:400}.c9{font-family:"Open Sans";color:#695d46;font-weight:400}.c31{font-family:"Open Sans";color:#695d46;font-weight:700}.c29{orphans:2;widows:2}.c15{max-width:468pt;padding:72pt 72pt 72pt 72pt}.c10{margin-left:36pt;padding-left:0pt}.c13{padding:0;margin:0}.c3{color:inherit;text-decoration:inherit}.c12{margin-left:72pt;padding-left:0pt}.c5{background-color:#ffffff}.c8{height:0pt}.c26{height:11pt}.title{padding-top:0pt;color:#000000;font-size:26pt;padding-bottom:3pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}.subtitle{padding-top:0pt;color:#666666;font-size:15pt;padding-bottom:16pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}li{color:#000000;font-size:11pt;font-family:"Arial"}p{margin:0;color:#000000;font-size:11pt;font-family:"Arial"}h1{padding-top:20pt;color:#000000;font-size:20pt;padding-bottom:6pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h2{padding-top:18pt;color:#000000;font-size:16pt;padding-bottom:6pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h3{padding-top:16pt;color:#434343;font-size:14pt;padding-bottom:4pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h4{padding-top:14pt;color:#666666;font-size:12pt;padding-bottom:4pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h5{padding-top:12pt;color:#666666;font-size:11pt;padding-bottom:4pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;orphans:2;widows:2;text-align:left}h6{padding-top:12pt;color:#666666;font-size:11pt;padding-bottom:4pt;font-family:"Arial";line-height:1.15;page-break-after:avoid;font-style:italic;orphans:2;widows:2;text-align:left} Oracle Fusion Middleware Deployments Using Docker Swarm Part IOverviewThis is the first in a series of blogs that 
describe how to build a Fusion Middleware (FMW) cluster that runs as a number of Docker images running in Docker containers.  These containers are coordinated using Docker Swarm and can be deployed to a single host machine or across multiple hosts.  This simplifies the task of building FMW clusters and also makes it easier to scale them in and out (adding or subtracting host machines) as well as up and down (using bigger or smaller host machines).  This blog uses Oracle Service Bus as an example FMW product, but the principles are applicable to other FMW products.

In this entry we explain the concepts of using Docker Swarm 1.12 to create a Fusion Middleware cluster.  We explain how to set up a minimal Linux machine to run a Docker engine and prepare it to start building out a Fusion Middleware cluster running on Swarm.  In future entries we will cover creating a database Docker image that we will use to support Fusion Middleware.  Other entries will explain how to create a Docker image to run Fusion Middleware and create a cluster in swarm mode by defining swarm services for the database, the WebLogic Admin Server and the WebLogic managed servers.

Goals

Create an FMW 12.2 cluster that runs on Docker Swarm 1.12.
Use Swarm to support scaling the FMW cluster in and out.

Introduction

FMW deployments can be complex to set up and configure.  The Enterprise Deployment Guide for SOA Suite 12.2.1 has a lot of steps to follow within its 382 pages.  This is taken care of for you when you subscribe to FMW on the cloud, such as Oracle SOA Cloud Service.  However, many companies still run their FMW deployments on premises and are not yet ready to move to the Oracle Cloud Machine, which provides a private cloud environment that looks just like the public cloud.

Docker provides a lightweight container in which to run software.  Multiple Docker images can be coordinated by Docker Swarm to form a cluster.
Each image runs in a Docker container, which virtualizes the resources used by the image, making it possible for multiple images to execute on a single machine or for images to be distributed across multiple machines.  In either case Swarm helps to manage the cluster of machines.

There are a number of blog entries that explain how to use FMW with Docker, but most of them only use the developer install, which can run in a single Docker container.  A more typical FMW installation has a separate database and multiple managed servers.  There are a few blog entries that describe this, but they use old versions of Docker and there are now better ways of implementing it.  We use the latest versions of Docker and Swarm to show how to create FMW clusters running on Docker Swarm in a configuration that is production ready.

Over this and the next few blog entries I have partnered with my colleague Shuxuan Nie to describe how to run a highly available FMW deployment on top of Docker.  We have used Oracle Service Bus as an example FMW deployment, but the principles can be applied to any FMW deployment.

Authors

Shuxuan 'Judy' Nie

Holding a master's degree in computer science from Beijing University of Aeronautics and Astronautics, Shuxuan Nie is a senior principal DevOps engineer working on Oracle's forthcoming Adaptive Intelligence Applications Cloud Service (AIA).  Prior to joining Oracle, Shuxuan worked as a systems architect at Australian-based Oracle partner Rubicon Red.  There she specialized in helping customers build highly available FMW environments on both physical hardware and VMs.  She also contributed to Rubicon Red's Myst platform provisioning and continuous delivery tool.
In her current position she works with Oracle Container Cloud Service (which provides Docker in the cloud) to provide deployment environments for the AIA cloud service.

Antony Reynolds

A graduate of Bristol University, Antony Reynolds is a Product Strategy Director in Oracle's Integration Products group, where he works with Oracle Integration Cloud Service (ICS).  In addition to working with ICS he also helps customers upgrade to the latest SOA Suite releases, working closely with some of the largest users of SOA Suite in the world.  A blogger and author of several books on Oracle SOA Suite, Antony was attracted to Docker by its lightweight container model and power of abstraction.

Architecture

FMW and WebLogic rely on a database and an Admin Server.  This leads us to require the following Docker containers to implement FMW on Docker.  Each Docker container runs a Docker image.  The beauty of Docker is that these containers can be deployed together on a single machine or distributed across multiple machines without any changes to their configuration.

Docker Containers for FMW Cluster

We could deploy all the containers on a single machine as shown below.

Docker Containers Running FMW on a Single Powerful Machine

Alternatively we can distribute them across several machines.

Docker Containers Running FMW on Multiple Machines

Of course we could choose a combination of the two.

Docker Containers Running FMW on Multiple Machines, Some Running Multiple Containers

Note that from the Docker container's perspective each container thinks it is running on its own physical machine, although it is actually running in a Docker container on a physical or virtual machine and may be sharing that machine with other running Docker containers.  Initially we deployed all the containers on a single machine; once we had that working we deployed our containers to multiple machines.

Docker & Swarm for FMW People

We are used to running FMW on multiple virtual machines.
This is used extensively on Oracle Exalogic, for example, as well as by the large number of customers who run FMW on Oracle Virtual Machine (OVM) or Dell's VMware.  The problem with VMs is that each virtual machine runs a whole operating system on top of the VM hypervisor.  This includes the kernel and file system, which makes a VM a relatively heavyweight container.

In contrast to a virtual machine, a Docker container just abstracts the CPU, memory, network and file system of the underlying operating system.  It does not run a separate kernel, instead deferring most of the execution to the underlying machine, physical or virtual.  This results in a lighter-weight container that still has the benefits of isolation and abstraction that a VM provides.  This is why people are excited about using Docker containers to host services, particularly microservices, that would be too expensive to dedicate a VM to.

A Docker image is a specific set of binaries and configuration that is targeted to run inside a Docker container.  This could be an Oracle database, a WebLogic managed server or a WebLogic Admin Server.

So let's translate FMW / WebLogic deployment terms onto Docker concepts.  Note that this is a conceptual mapping, not an exact equivalence.

FMW Concept | Docker Concept | Notes
Machine | Docker Engine | A Docker engine can run multiple Docker containers on a single host.
Cluster | Swarm Cluster | Docker swarm clusters can be used to run multiple instances of an image, providing a great platform to run an FMW cluster on top of.
Node Manager | Swarm Manager | Manages the swarm cluster and decides on which Docker engine to run a particular container.  Note that this function is split between the WebLogic Console and Node Manager in WebLogic.
Binaries and Domain Configuration | Docker Image | A Docker image represents a set of code and associated configuration packaged up for execution in a Docker container.
Server Instance | Docker Container | A running image instance; parameters can be used to customize the instance, so that the same image can be used to run an Admin Server or a Managed Server.
Managed WebLogic Server in a Cluster | Swarm Service | A swarm service is a generic service.  A WebLogic Admin Server would be a swarm service with a single instance.  A Managed Server would be a swarm service that allows multiple instances.

Note that although FMW allows multiple servers to be targeted at a single machine, Docker achieves this at two levels: a Docker container runs a single server, but a Docker engine can run multiple containers.  This is not as limiting as it sounds, because a Docker container is much lighter than a VM, so it is easy to run multiple Docker containers on a Docker engine running on a single VM or physical machine with minimal overhead on top of the requirements of the WebLogic server.

FMW for Docker People

Oracle Fusion Middleware is the name for Oracle's middleware products.  These are powerful proprietary middleware products used by customers throughout the world to run applications and portals, integrate systems, provide business analytics and secure their environments, amongst other things.

Although many of these products are available in free developer editions, they still require acknowledgement of the Oracle license and as such cannot be distributed as part of a Docker image.  These products can also be large, often requiring several gigabytes of download before being installed.  These factors combined have led Oracle and others to provide dockerfiles that create Docker images by installing and configuring Oracle products into provided base images.
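A minimal sketch of this dockerfile pattern might look like the following.  The base image tag, installer jar name, response file and paths here are hypothetical placeholders (this is not one of Oracle's official dockerfiles), and the Oracle installer itself must be downloaded separately after accepting the license:

```dockerfile
# Hypothetical sketch of the "install licensed software into a base image" pattern.
# Installer jar, response file and paths are illustrative placeholders only.
FROM oraclelinux:7-slim

ENV JAVA_HOME=/u01/jdk ORACLE_HOME=/u01/oracle

RUN useradd -m oracle && mkdir -p /u01 && chown -R oracle:oracle /u01

# The JDK and Oracle installer are supplied in the build context,
# NOT distributed with the dockerfile.
COPY jdk/ /u01/jdk/
COPY fmw_infra_installer.jar install.rsp /tmp/
RUN chown -R oracle:oracle /u01/jdk /tmp/fmw_infra_installer.jar /tmp/install.rsp

USER oracle
# A silent install driven by a response file keeps the image build unattended.
RUN $JAVA_HOME/bin/java -jar /tmp/fmw_infra_installer.jar \
      -silent -responseFile /tmp/install.rsp
```

The resulting image would then be pushed to a company-specific registry rather than a public one, keeping the licensed binaries private.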
The software to be installed must be provided separately from the dockerfiles.  The created Docker images can then be stored in a company-specific Docker repository.  These Docker images can then be used in the same way as any other Docker image.  This is the approach that will be followed in this document.

Base VM Image

We built on a headless, minimum-rpm-install Oracle Linux 7 VM that has a download footprint of only 1GB.  The instructions provided are applicable to other OL 7 and RHEL 7 environments.  With minimal changes the same instructions should also apply to other Linux releases such as Ubuntu.  The image is configured and set up with a Docker environment.  The VM is used as our base platform to run Docker.

Downloading VM

Because of licensing restrictions we are unable to provide a VM for download.  However, if you are an Oracle employee then please contact us and we will point you at a downloadable image.

Image Pre-Requisites

If you don't use the provided image then the following is needed on an OL 7 image.

As root, install yum-utils and enable the UEKR4 and addons repositories in yum:

yum -y install yum-utils
yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_UEKR4
yum-config-manager --disable ol7_UEKR3_latest

Install the 4.x kernel and reboot the system, selecting the UEK R4 kernel if it is not the default boot kernel:

yum update
systemctl reboot

Install the minimum required packages:

yum install wget unzip net-tools

Provide a btrfs file system as the storage driver used by Docker instead of the default device mapper.
We will refer to the root of the btrfs file system as /var/lib/docker.

Use yum to install the btrfs-progs package:

[root@docker-base ~]# yum install btrfs-progs

Create a btrfs file system on the device, such as /dev/sdb in this example:

[root@docker-base ~]# mkfs.btrfs /dev/sdb

Display the UUID of the device via the blkid command:

[root@docker-base ~]# blkid /dev/sdb
/dev/sdb: UUID="fe6bf7a2-34fc-4800-af2b-da19d2d8db6d" UUID_SUB="e955087d-80b5-4051-bb28-ed71edb9cea4" TYPE="btrfs"

Create the config file /etc/systemd/system/var-lib-docker.mount with the content below, replacing the UUID with your own value:

[Unit]
Description = Docker Image Store

[Mount]
What = UUID=<Your UUID Value>
Where = /var/lib/docker
Type = btrfs

[Install]
WantedBy = multi-user.target

The above file defines a systemd mount unit that mounts the file system on /var/lib/docker; you need to create this folder if this is a fresh installation:

[root@docker-base ~]# mkdir /var/lib/docker

Enable the var-lib-docker.mount target and mount the file system:

[root@docker-base ~]# systemctl enable var-lib-docker.mount
[root@docker-base ~]# systemctl start var-lib-docker.mount

Create /etc/systemd/system/docker.service.d/var-lib-docker-mount.conf to tell systemd to mount the /var/lib/docker file system using the var-lib-docker.mount target before starting the Docker service:

[Unit]
Requires=var-lib-docker.mount
After=var-lib-docker.mount

Set the SELinux mode to permissive or disable SELinux.  In this demo we disable SELinux in /etc/selinux/config.  We do this to avoid having to configure SELinux policies to run Docker.
Your Linux security team may require you to set up an appropriate policy.

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

The above change requires a reboot of the system:

[root@docker-base ~]# systemctl reboot

Install the Docker engine; in our demo we install the latest version, 1.12:

[root@docker-base ~]# yum install docker-engine

Start the Docker service and set it to start automatically:

[root@docker-base ~]# systemctl start docker
[root@docker-base ~]# systemctl enable docker

Check the Docker running status:

[root@docker-base ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker-network.conf, docker-sysconfig.conf, http-proxy.conf, var-lib-docker-mount.conf
   Active: active (running) since Sun 2017-01-08 12:16:38 PST; 52min ago
     Docs: https://docs.docker.com
 Main PID: 871 (dockerd)
   Memory: 64.7M
   CGroup: /system.slice/docker.service
           ├─ 871 dockerd --selinux-enabled
           └─1020 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0...

Display the Docker information; you can verify the version and the btrfs storage driver:

[root@docker-base ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.2
Storage Driver: btrfs
 Build Version: Btrfs v3.19.1
 Library Version: 101
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.1.12-61.1.24.el7uek.x86_64
Operating System: Oracle Linux Server 7.3
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 4.133 GiB
Name: docker-base.us.oracle.com
ID: ELNP:SGZ6:MUMH:FWOC:3BZ2:NZOA:KBKQ:3RT2:ADS4:AX5T:M5YT:RLXR
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8

Create the oracle user:

useradd oracle

Add the oracle user to the docker group:

usermod -a -G docker oracle

Summary

In this entry we have introduced the concept of using Docker Swarm 1.12 to create a Fusion Middleware cluster.  We have explained how to set up a minimal Linux machine to run a Docker engine and prepared it to start building out a Fusion Middleware cluster running on Swarm.  In our next entry we will cover creating a database Docker image that we will use in later entries to support Fusion Middleware.  Later entries will explain how to install Fusion Middleware and create a cluster in swarm mode by defining swarm services for the database, the WebLogic Admin Server and the WebLogic managed servers.
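To preview where the series is heading, defining swarm services for the database, Admin Server and managed servers might look along the following lines on Docker 1.12.  This is a sketch only; the image names, network name and replica counts are hypothetical placeholders, not the images built in later entries:

```
# Sketch only: image names, network name and ports are hypothetical.
# Initialize swarm mode on the first Docker engine (Docker 1.12+).
docker swarm init

# An overlay network lets containers reach each other across hosts.
docker network create --driver overlay fmwnet

# Single-replica services for the database and the WebLogic Admin Server.
docker service create --name fmwdb --network fmwnet --replicas 1 mycompany/oracle-db
docker service create --name adminserver --network fmwnet --replicas 1 \
    --publish 7001:7001 mycompany/fmw-admin

# Managed servers scale in and out by changing the replica count.
docker service create --name osbserver --network fmwnet --replicas 2 mycompany/fmw-managed
docker service scale osbserver=4
```

Later entries in the series flesh out how the real images and services are built and wired together.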


SOA Suite

Slicing the EDG

Different SOA Domain Configurations

In this blog entry I would like to introduce three different configurations for a SOA environment.  I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion.  For each possible deployment architecture I have identified some of the advantages.

Super Domain

This is a single EDG-style domain for everything needed for SOA/OSB.  It extends the standard EDG slightly but otherwise assumes a single "super" domain.  This is basically the SOA EDG.  I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.

Key Points
- Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if the rest of the domain is unavailable.
- JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
- Separate Coherence servers allow the OSB cache to be offloaded from the OSB servers.
- Coherence can be used by other components as a shared infrastructure data grid service.
- The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.

Benefits
- Single administration point (one Admin Server).
- Closely follows the EDG, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
- The Coherence grid can be scaled independently of OSB/SOA.
- JMS queues provide for inter-application communication.

Drawbacks
- Patching is an all-or-nothing affair.
- Startup time for SOA may be slow if a large number of composites are deployed.

Multiple Domains

This extends the EDG into multiple domains, allowing separate management and update of these domains.  I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence, etc.  SOA & BAM are kept in the same domain as little benefit is obtained by separating them.

Key Points
- Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
- JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
- Separate Coherence servers allow the OSB cache to be offloaded from the OSB servers.
- Coherence can be used by other components as a shared infrastructure data grid service.
- The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.

Benefits
- Follows the EDG but in separate domains, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
- The Coherence grid can be scaled independently of OSB/SOA.
- JMS queues provide for inter-application communication.
- The patch lifecycles of OSB/SOA/JMS are no longer locked in step.
- JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
- OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
- All domains use the same OWSM policy store (MDS-WSM).

Drawbacks
- Multiple domains to manage and configure.
- Multiple Admin Servers (a single view requires use of Grid Control).
- Multiple Admin Servers/WSM clusters waste resources.
- Additional homes are needed to enjoy the benefits of separate patching.
- Cross-domain trust needs setting up to simplify cross-domain interactions.
- Startup time for SOA may be slow if a large number of composites are deployed.

Shared Service Environment

This model extends the previous multiple-domain arrangement to provide a true shared service environment.  It allows multiple additional SOA domains and/or other domains to take advantage of the shared services.  Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independently of other application groups.

Key Points
- Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
- JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
- Separate Coherence servers allow the OSB cache to be offloaded from the OSB servers.
- Coherence can be used by other components as a shared infrastructure data grid service.
- The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.
- The shared SOA domain hosts Human Workflow tasks, BAM and common "utility" composites.
- A single OSB domain provides the "Enterprise Service Bus".
- All domains use the same OWSM policy store (MDS-WSM).

Benefits
- Follows the EDG but in separate domains, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
- The Coherence grid can be scaled independently of OSB/SOA.
- JMS queues provide for inter-application communication.
- The patch lifecycles of OSB/SOA/JMS are no longer locked in step.
- JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
- OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
- All domains use the same OWSM policy store (MDS-WSM).
- Supports large numbers of deployed composites in multiple domains.
- Single URL for Human Workflow end users.
- Single URL for BAM end users.

Drawbacks
- Multiple domains to manage and configure.
- Multiple Admin Servers (a single view requires use of Grid Control).
- Multiple Admin Servers/WSM clusters waste resources.
- Additional homes are needed to enjoy the benefits of separate patching.
- Cross-domain trust needs setting up to simplify cross-domain interactions.
- Human Workflow needs to be specially configured to point to the shared services domain.

Summary

The alternatives in this blog allow for patching to have different impacts, depending on the model chosen.  Each organization must decide the tradeoffs for itself.  One extreme is to go for the shared services model and have one domain per SOA application; this requires a lot of administration of the multiple domains.  The other extreme is to have a single super domain; this makes the entire enterprise susceptible to an outage at the same time due to patching or other domain-level changes.  Hopefully this blog will help your organization choose the right model for you.


SOA Suite

Coherence Adapter Configuration

SOA Suite 12c Coherence AdapterThe release of SOA Suite 12c sees the addition of a Coherence Adapter to the list of Technology Adapters that are licensed with the SOA Suite.  In this entry I provide an introduction to configuring the adapter and using the different operations it supports.The Coherence Adapter provides access to Oracles Coherence Data Grid.  The adapter provides access to the cache capabilities of the grid, it does not currently support the many other features of the grid such as entry processors – more on this at the end of the blog.Previously if you wanted to use Coherence from within SOA Suite you either used the built in caching capability of OSB or resorted to writing Java code wrapped as a Spring component.  The new adapter significantly simplifies simple cache access operations.ConfigurationWhen creating a SOA domain the Coherence adapter is shipped with a very basic configuration that you will probably want to enhance to support real requirements.  In this section I look at the configuration required to use Coherence adapter in the real world.Activate AdapterThe Coherence Adapter is not targeted at the SOA server by default, so this targeting needs to be performed from within the WebLogic console before the adapter can be used.Create a cache configuration fileThe Coherence Adapter provides a default connection factory to connect to an out-of-box Coherence cache and also a cache called adapter-local.  This is helpful as an example but it is good practice to only have a single type of object within a Coherence cache, so we will need more than one.  Without having multiple caches then it is hard to clean out all the objects of a particular type.  Having multiple caches also allows us to specify different properties for each cache.  
The following is a sample cache configuration file used in the example.<?xml version="1.0"?><!DOCTYPE cache-config SYSTEM "cache-config.dtd"><cache-config>  <caching-scheme-mapping>    <cache-mapping>      <cache-name>TestCache</cache-name>      <scheme-name>transactional</scheme-name>    </cache-mapping>  </caching-scheme-mapping>  <caching-schemes>    <transactional-scheme>      <scheme-name>transactional</scheme-name>      <service-name>DistributedCache</service-name>      <autostart>true</autostart>    </transactional-scheme>  </caching-schemes></cache-config>This defines a single cache called TestCache.  This is a distributed cache, meaning that the entries in the cache will distributed across the grid.  This enables you to scale the storage capacity of the grid by adding more servers.  Additional caches can be added to this configuration file by adding additional <cache-mapping> elements.The cache configuration file is reference by the adapter connection factory and so needs to be on a file system accessed by all servers running the Coherence Adapter.  It is not referenced from the composite.Create a Coherence Adapter Connection FactoryWe find the correct cache configuration by using a Coherence Adapter connection factory.  The adapter ships with a few sample connection factories but we will create new one.  To create a new connection factory we do the following:On the Outbound Connection Pools tab of the Coherence Adapter deployment we select New to create the adapter. Choose the javax.resource.cci.ConnectionFactory group. Provide a JNDI name, although you can use any name something along the lines of eis/Coherence/Test is a good practice (EIS tells us this an adapter JNDI, Coherence tells us it is the Coherence Adapter, and then we can identify which adapter configuration we are using). If requested to create a Plan.xml then make sure that you save it in a location available to all servers. 
- From the Outbound Connection Pools tab, select your new connection factory so that you can configure it from the Properties tab.
- Set CacheConfigLocation to point to the cache configuration file created in the previous section.
- Set ClassLoaderMode to CUSTOM.
- Set ServiceName to the name of the service used by your cache in the cache configuration file created in the previous section.
- Set WLSExtendProxy to false unless your cache configuration file is using an extend proxy.
- If you plan on using POJOs (Plain Old Java Objects) with the adapter rather than XML, point PojoJarFile at the location of a jar file containing your POJOs.
- Make sure to press Enter in each field after entering your data, and remember to save your changes when done.

You may need to stop and restart the adapter to get it to recognize the new connection factory.

Operations

To demonstrate the different operations I created a WSDL with the following operations:

- put: put an object into the cache with a given key value.
- get: retrieve an object from the cache by key value.
- remove: delete an object from the cache by key value.
- list: retrieve all the objects in the cache.
- listKeys: retrieve all the keys of the objects in the cache.
- removeAll: remove all the objects from the cache.

I created a composite based on this WSDL that calls a different adapter reference for each operation.  Details on configuring the adapter within a composite are provided in the Configuring the Coherence Adapter section of the documentation.  I used a Mediator to map the input WSDL operations to the individual adapter references.

Schema

The input schema is shown below.  This type of pattern is likely to be used in all XML types stored in a Coherence cache.  The XMLCacheKey element represents the cache key; in this schema it is a string, but it could be another primitive type.
The other fields in the cached object are represented by a single XMLCacheContent field, but in a real example you are likely to have multiple fields at this level.  Wrapper elements are provided for lists of elements (XMLCacheEntryList) and lists of cache keys (XMLCacheEntryKeyList).  XMLEmpty is used for operations that don't require an input.

Put Operation

The put operation takes an XMLCacheEntry as input and passes this straight through to the adapter.  The XMLCacheKey element in the entry is also assigned to the jca.coherence.key property.  This sets the key for the cached entry.  The adapter also supports automatically generating a key, which is useful if you don't have a convenient field in the cached entity.  The cache key is always returned as the output of this operation.

Get Operation

The get operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property.  This sets the key for the entry to be retrieved.

Remove Operation

The remove operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property.  This sets the key for the entry to be deleted.

RemoveAll Operation

This is similar to the remove operation, but instead of using a key as input it uses a filter.  The filter could be overridden by using the jca.coherence.filter property, but for this operation it was permanently set in the adapter wizard to be the following query:

  key() != ""

This selects all objects whose key is not equal to the empty string.  All objects should have a key, so this query should select all objects for deletion.

Note that there appears to be a bug in the return value.  The return value is empty rather than having the expected RemoveResponse element with a Count child element.  Note that the documentation states:

  When using a filter for a Remove operation, the Coherence Adapter does not report the count of entries affected by the remove operation, regardless of whether the remove operation is successful.
  When using a key to remove a specific entry, the Coherence Adapter does report the count, which is always 1 if a Coherence Remove operation is successful.

Although this could be interpreted as meaning an empty part is returned, an empty part is a violation of the WSDL contract.

List Operation

The list operation takes no input and returns the result list returned by the adapter.  The adapter also supports querying using a filter.  This filter is essentially the where clause of a Coherence Query Language statement.  When using XML types as cached entities, only the key() field can be tested, for example using a clause such as:

  key() LIKE "Key%1"

This filter would match all entries whose key starts with "Key" and ends with "1".

ListKeys Operation

The listKeys operation is essentially the same as the list operation except that only the keys are returned rather than the whole object.

Testing

To test the composite I used the new 12c Test Suite wizard to create a number of test suites.  The test suites should be executed in the following order:

- CleanupTestSuite has a single test that removes all the entries from the cache used by this composite.
- InitTestSuite has 3 tests that each insert a single record into the cache.  The returned key is validated against the expected value.
- MainTestSuite has 5 tests that list the elements and keys in the cache and retrieve individual inserted elements.  This verifies that the items inserted in the previous suite are actually in the cache.  It also tests the get, list and listKeys operations and makes sure they return the expected results.
- RemoveTestSuite has a single test that removes an element from the cache and checks that the count of removed elements is 1.
- ValidateRemoveTestSuite is similar to MainTestSuite but verifies that the element removed by the previous test suite has actually been removed.

Use Case

One example of using the Coherence Adapter is to create a shared memory region that allows SOA composites to share information.
An example of this is provided by Lucas Jellema in his blog entry First Steps with the Coherence Adapter to create cross instance state memory.

However, there is a problem in creating global variables that can be updated by multiple instances at the same time.  The get and put operations provided by the Coherence adapter follow a last-write-wins model.  In Coherence this can be avoided by using an Entry Processor to update the entry in the cache, but entry processors are not currently supported by the Coherence Adapter.  For now it is still necessary to use Java to invoke the entry processor.

Sample Code

The sample code I refer to above is available for download and consists of two JDeveloper projects, one with the cache config file and the other with the Coherence composite:

- CoherenceConfig has the cache config file that must be referenced by the connection factory properties.
- CoherenceSOA has a composite that supports the WSDL introduced at the start of this blog along with the test cases mentioned at the end of the blog.

The Coherence Adapter is a really exciting new addition to the SOA developer's toolkit; hopefully this article will help you make use of it.
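The "last write wins" hazard mentioned above can be illustrated without any Coherence infrastructure at all.  The following plain-Java sketch (illustrative names only, not the Coherence or adapter API) shows two clients doing a read-modify-write through get/put, losing one update, and the same work done as an atomic update at the entry, which is the role an Entry Processor plays in the grid:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the lost-update problem with get/put versus an atomic
// entry-level update (the Entry Processor model). Illustrative only.
public class LastWriteWins {
    static int lostUpdateResult() {
        ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
        cache.put("counter", 0);
        int a = cache.get("counter");   // client A reads 0
        int b = cache.get("counter");   // client B reads 0
        cache.put("counter", a + 1);    // A writes 1
        cache.put("counter", b + 1);    // B also writes 1; A's update is lost
        return cache.get("counter");
    }

    static int atomicResult() {
        ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
        cache.put("counter", 0);
        // compute() applies the change atomically at the entry, like an
        // Entry Processor running on the node that owns the data.
        cache.compute("counter", (k, v) -> v + 1);
        cache.compute("counter", (k, v) -> v + 1);
        return cache.get("counter");
    }

    public static void main(String[] args) {
        System.out.println("get/put result: " + lostUpdateResult()); // 1, not 2
        System.out.println("atomic result:  " + atomicResult());     // 2
    }
}
```

Until the adapter supports entry processors, any shared counter or global variable updated through the adapter's get and put operations is exposed to the first behaviour.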



One Queue to Rule them All

Using a Single Queue for Multiple Message Types with SOA Suite

Problem Statement

You use a single JMS queue for sending multiple message types / service requests.  You use a single JMS queue for receiving multiple message types / service requests.  You have multiple SOA JMS Adapter interfaces for reading and writing these queues.  In a composite it is random which interface gets a message from the JMS queue.  It is not a problem to have multiple adapter instances writing to a single queue; the problem arises only with multiple readers, because each reader gets the first message on the queue.

Background

The JMS Adapter is unaware of who receives the messages.  Each adapter instance simply takes the message from the queue and delivers it to its own configured interface, one interface per adapter instance.  The SOA infrastructure is then responsible for routing that message, usually via a database table and an in-memory notification message, to a component within a composite.  Each message will create a new composite instance, but the BPEL engine and Mediator engine will attempt to match callback messages to the appropriate Mediator or BPEL instance.  Note that message type, including XML document type, has nothing to do with the preceding statements.

The net result is that if you have a sequence of two receives from the same queue using different adapters, the messages will be split equally between the two adapters, meaning that half the time the wrong adapter will receive the message.  This blog entry looks at how to resolve this issue.

Note that the same problem occurs whenever you have more than one adapter listening to the same queue, whether they are in the same composite or different composites.  The solution in this blog entry is also relevant to that use case.

Solutions

In order to deliver the messages to the correct interface we need to identify the interface they should be delivered to.  This can be done by using JMS properties.
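The effect of a selector can be sketched in plain Java (illustrative types only, not the JMS API): each consumer takes the first message on the queue whose property matches its selector, skipping messages intended for other interfaces.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of JMS-style message selection on a shared queue. A consumer
// configured with a selector receives the first message whose property
// matches, not simply the first message on the queue. Illustrative only.
public class SelectorDemo {
    record Message(Map<String, String> properties, String body) {}

    // Remove and return the first message matching property=value,
    // as a selector-driven consumer would.
    static Message receive(List<Message> queue, String property, String value) {
        for (int i = 0; i < queue.size(); i++) {
            if (value.equals(queue.get(i).properties().get(property))) {
                return queue.remove(i);
            }
        }
        return null; // nothing matching the selector is on the queue
    }

    public static void main(String[] args) {
        List<Message> queue = new ArrayList<>();
        queue.add(new Message(Map.of("JMSType", "Service2"), "reply-2"));
        queue.add(new Message(Map.of("JMSType", "Service1"), "reply-1"));

        // The Service1 adapter skips the Service2 reply at the head of the queue.
        Message m = receive(queue, "JMSType", "Service1");
        System.out.println(m.body()); // reply-1
    }
}
```

A real JMS consumer does this server-side via a selector expression such as "JMSType='Service1'", as described next.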
For example, the JMSType property can be used to identify the type of the message.  A message selector can be added to the JMS inbound adapter that will cause the adapter to filter out messages intended for other interfaces.  For example, suppose we need to call three services that are implemented in a single application:

- Service 1 receives messages on the single outbound queue from SOA and sends responses back on the single inbound queue.
- Similarly, Service 2 and Service 3 also receive messages on the single outbound queue from SOA and send responses back on the single inbound queue.

First we need to ensure the messages are delivered to the correct adapter instance.  This is achieved as follows:

- The inbound JMS adapter is configured with a JMS message selector.  The message selector might be "JMSType='Service1'" for responses from Service 1.  Similarly the selector would be "JMSType='Service2'" for the adapter waiting on a response from Service 2.  The message selector ensures that each adapter instance will retrieve the first message from the queue that matches its selector.
- The sending service needs to set the JMS property (JMSType in our example) that is used in the message selector.

Now that our messages are being delivered to the correct interface, we need to make sure that they get delivered to the correct Mediator or BPEL instance.  We do this with correlation.  There are several correlation options:

- We can do manual correlation with a correlation set, identifying parts of the outbound message that uniquely identify our instance and matching them with parts of the inbound message to make the correlation.
- We can use a Request-Reply JMS adapter, which by default expects the response to contain a JMSCorrelationID equal to the outgoing JMSMessageID.
Although no configuration is required for this on the SOA client side, the service needs to copy the incoming JMSMessageID to the outgoing JMSCorrelationID.

Special Case - Request-Reply Synchronous JMS Adapter

When using a synchronous Request-Reply JMS adapter we can omit the message selector, because the Request-Reply JMS adapter will immediately do a listen with a message selector for the correlation ID rather than processing the incoming message asynchronously.

The synchronous request-reply will block the BPEL process thread and hold open the BPEL transaction until a response is received, so this should only be used when you expect the request to be completed in a few seconds.

The JCA Connection Factory used must point to a non-XA JMS Connection Factory and must have the isTransacted property set to "false".  See the documentation for more details.

Sample

I developed a JDeveloper SOA project that demonstrates using a single queue for multiple incoming adapters.  The overall process flow is shown in the picture below.  The BPEL process on the left receives messages from jms/TestQueue2 and sends messages to jms/TestQueue.  A Mediator is used to simulate multiple services and also provide a web interface to initiate the process.  The correct adapter is identified by using JMS message properties and a selector.

The flow above shows that the process is initiated from EM using a web service binding on the Mediator.  The Mediator, acting as a client, posts the request to the inbound queue with a JMSType property set to "Initiate".

Model

Inbound Request
- Client receives the web service request and posts it to the inbound queue with JMSType='Initiate'.
- The JMS adapter with a message selector "JMSType='Initiate'" receives the message and causes a composite to be created.
- The composite in turn causes the BPEL process to start executing.
- The BPEL process then sends a request to Service 1 on the outbound queue.

Key Points
- The Initiate message can be used to initiate a correlation set if necessary.
- A selector is required to distinguish Initiate messages from other messages on the queue.

Separate Request and Reply Adapters
- Service 1 receives the request and sends a response on the inbound queue with JMSType='Service1' and JMSCorrelationID set to the incoming JMS message ID.
- The JMS adapter with a message selector "JMSType='Service1'" receives the message and causes a composite to be created.
- The composite uses a correlation set to in turn deliver the message to BPEL, which correlates it with the existing BPEL process.
- The BPEL process then sends a request to Service 2 on the outbound queue.

Key Points
- Separate request and reply adapters require a correlation set to ensure that the reply goes to the correct BPEL process instance.
- A selector is required to distinguish Service 1 response messages from other messages on the queue.

Asynchronous Request-Reply Adapter
- Service 2 receives the request and sends a response on the inbound queue with JMSType='Service2' and JMSCorrelationID set to the incoming JMS message ID.
- The JMS adapter with a message selector "JMSType='Service2'" receives the message and causes a composite to be created.
- The composite in turn delivers the message to the existing BPEL process using native JMS correlation.

Key Points
- The asynchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the correlation ID to ensure that the reply goes to the correct BPEL process instance.
- A selector is still required to distinguish Service 2 response messages from other messages on the queue.

Synchronous Request-Reply Adapter
- The BPEL process then sends a request to Service 3 on the outbound queue using a synchronous request-reply.
- Service 3 receives the request and sends a response on the inbound queue with JMSType='Service3' and JMSCorrelationID set to the incoming JMS message ID.
- The synchronous JMS adapter receives the response without a message selector, correlates it to the BPEL process using native JMS correlation, and sends the overall response to the outbound queue.

Key Points
- The synchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the correlation ID to ensure that the reply goes to the correct BPEL process instance.
- A selector is also not required to distinguish Service 3 response messages from other messages on the queue, because the synchronous adapter is already selecting on the expected correlation ID.

Outbound Response
- Client receives the response on the outbound queue.

Summary

When using a single JMS queue for multiple purposes bear in mind the following:

- If multiple receives use the same queue then you need a message selector.  The corollary is that the message sender must add a JMS property to the message that can be used in the message selector.
- When using a request-reply JMS adapter there is no need for a correlation set; correlation is done in the adapter by matching the outbound JMS message ID to the inbound JMS correlation ID.  The corollary is that the message sender must copy the JMS request message ID to the JMS response correlation ID.
- When using a synchronous request-reply JMS adapter there is no need for a message selector, because message selection is done based on the JMS correlation ID.
- The synchronous request-reply adapter requires a non-XA connection factory so that the request part of the interaction can be committed separately from the receive part of the interaction.
- The synchronous request-reply JMS adapter should only be used when the reply is expected to take just a few seconds.  If the reply is expected to take longer then the asynchronous request-reply JMS adapter should be used.

Deploying the Sample

The sample is available to download here and makes use of the following JMS resources:

- jms/TestQueue (Queue): outbound queue from the BPEL process.
- jms/TestQueue2 (Queue): inbound queue to the BPEL process.
- eis/wls/TestQueue (JMS Adapter connection factory): this can point to an XA or non-XA JMS connection factory, such as weblogic.jms.XAConnectionFactory.
- eis/wls/TestQueueNone-XA (non-XA JMS Adapter connection factory): this must point to a non-XA JMS connection factory, such as weblogic.jms.ConnectionFactory, and must have isTransacted set to "false".

To run the sample, just use the test facility in the EM console or the soa-infra application.
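The native request-reply correlation used throughout the sample comes down to one contract: the service copies the request's JMSMessageID into the reply's JMSCorrelationID, and the client matches replies to waiting instances by that ID.  A plain-Java sketch of that contract (illustrative names, not the JMS API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of native JMS request-reply correlation: reply.JMSCorrelationID
// is set to request.JMSMessageID, and the client resolves the waiting
// request by that ID. Illustrative only.
public class CorrelationDemo {
    // Pending requests, keyed by the JMSMessageID we sent.
    static final Map<String, String> pending = new HashMap<>();

    static String send(String messageId, String payload) {
        pending.put(messageId, payload);
        return messageId;
    }

    // Service side: the reply's correlation ID is the request's message ID.
    static String replyCorrelationId(String requestMessageId) {
        return requestMessageId;
    }

    // Client side: correlate the reply back to the original request.
    static String correlate(String correlationId) {
        return pending.remove(correlationId);
    }

    public static void main(String[] args) {
        String id = send("ID:msg-42", "call Service1");
        String corr = replyCorrelationId(id);
        System.out.println(correlate(corr)); // call Service1
    }
}
```

If the service fails to copy the message ID, the adapter has nothing to match on, which is why that copy step is called out as a corollary in the summary above.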



Not Just a Cache

Coherence as a Compute Grid

Coherence is best known as a data grid, providing distributed caching with an ability to move processing to the data in the grid.  Less well known is the fact that Coherence also has the ability to function as a compute grid, distributing work across multiple servers in a cluster.  In this entry, which was co-written with my colleague Utkarsh Nadkarni, we will look at using Coherence as a compute grid through the use of the Work Manager API and compare it to manipulating data directly in the grid using Entry Processors.

Coherence Distributed Computing Options

The Coherence documentation identifies several methods for distributing work across the cluster; see Processing Data in a Cache.  They can be summarized as:

- Entry Processors
  - The InvocableMap interface, inherited by the NamedCache interface, provides support for executing an agent (EntryProcessor or EntryAggregator) on individual entries within the cache.
  - The entries may or may not exist; either way the agent is executed once for each key provided, or, if no key is provided, once for each object in the cache.
  - In Enterprise and Grid editions of Coherence the entry processors are executed on the primary cache nodes holding the cached entries.
  - Agents can return results.
  - One agent executes multiple times per cache node, once for each key targeted on the node.
- Invocation Service
  - An InvocationService provides support for executing an agent on one or more nodes within the grid.
  - Execution may be targeted at specific nodes or at all nodes running the Invocation Service.
  - Agents can return results.
  - One agent executes once per node.
- Work Managers
  - The WorkManager class provides a grid-aware implementation of the commonJ WorkManager, which can be used to run tasks across multiple threads on multiple nodes within the grid.
  - WorkManagers run on multiple nodes.
  - Each WorkManager may have multiple threads.
  - Tasks implement the Work interface and are assigned to specific WorkManager threads to execute.
  - Each task is executed once.

Three Models of Distributed Computation

The previous section shows that there are three distinct execution models:

- Per cache entry execution (Entry Processor)
  - Execute the agent on the entry corresponding to a cache key.
  - Entries processed on a single thread per node.
  - Parallelism across nodes.
- Per node execution (Invocation Service)
  - Execute the same agent once per node.
  - Agent processed on a single thread per node.
  - Parallelism across nodes.
- Per task execution (Work Manager)
  - Each task executed once.
  - Parallelism across nodes and across threads within a node.

The entry processor is good for operating on individual cache entries.  It is not so good for working on groups of cache entries.

The invocation service is good for performing checks on a node, but is limited in its parallelism.

The work manager is good for operating on groups of related entries in the cache or performing non-cache-related work in parallel.  It has a high degree of parallelism.

As you can see, the primary choice for distributed computing comes down to the Work Manager and the Entry Processor.

Differences between using Entry Processors and Work Managers in Coherence

- Degree of parallelization
  - Entry Processors: a function of the number of Coherence nodes.  EntryProcessors are run concurrently across all nodes in a cluster; however, within each node only one instance of the entry processor executes at a time.
  - Work Managers: a function of the number of Work Manager threads.  The Work is run concurrently across all threads in all Work Manager instances.
- Transactionality
  - Entry Processors: transactional.  If an EntryProcessor running on one node does not complete (say, due to that node crashing), the entries targeted will be executed by an EntryProcessor on another node.
  - Work Managers: not transactional.
    The specification does not explicitly state what the response should be if a remote server crashes during an execution; the current implementation uses WORK_COMPLETED with a WorkCompletedException as the result.  If a Work does not run to completion, it is the responsibility of the client to resubmit the Work to the Work Manager.
- How is the cache accessed or mutated?
  - Entry Processors: operations against the cache contents are executed by (and thus within the localized context of) a cache.
  - Work Managers: accesses and changes to the cache are done directly through the cache API.
- Where is the processing performed?
  - Entry Processors: in the same JVM where the entries-to-be-processed reside.
  - Work Managers: in the Work Manager server, which may not be the same JVM where the entries-to-be-processed reside.
- Network traffic
  - Entry Processors: a function of the size of the EntryProcessor.  Typically the size of an EntryProcessor is much smaller than the size of the data transferred across nodes in the Work Manager approach, making the EntryProcessor approach more network-efficient and hence more scalable.  One EntryProcessor is transmitted to each cache node.
  - Work Managers: a function of the number of Work objects, of which multiple may be sent to each server, plus the size of the data set transferred from the backing map to the Work Manager server.
- Distribution of "tasks"
  - Entry Processors: tasks are moved to the location at which the entries-to-be-processed are being managed.  This may result in a random distribution of tasks; the distribution tends to get more equitable as the number of entries increases.
  - Work Managers: tasks are distributed equally across the threads in the Work Manager instances.
- Implementation of the EntryProcessor or Work class
  - Entry Processors: create a class that extends AbstractProcessor.  Implement the process method, updating the cache item based on the key passed in.
  - Work Managers: create a class that is serializable and implements commonj.work.Work.
    Implement the run method.
- Implementation of the "task"
  - Entry Processors: in the process method, update the cache item based on the key passed into the process method.
  - Work Managers: in the run method, get a reference to the named cache and do the work: get a reference to the cache item, change the cache item, then put the cache item back into the named cache.
- Completion notification
  - Entry Processors: when the NamedCache.invoke method completes, all the entry processors have completed executing.
  - Work Managers: when a task is submitted for execution it executes asynchronously on the work manager threads in the cluster.  Status may be obtained by registering a commonj.work.WorkListener class when calling the WorkManager.schedule method; this provides updates when the Work is accepted, started and completed or rejected.  Alternatively, the WorkManager.waitForAll and WorkManager.waitForAny methods allow blocking waits for all or one result respectively.
- Returned results
  - Entry Processors: java.lang.Object when executed on one cache item, containing the result of the invocation as returned from the EntryProcessor; java.util.Map when executed on a collection of keys, containing the results of invoking the EntryProcessor against each of the specified keys.
  - Work Managers: commonj.work.WorkItem, with three possible outcomes.  The Work is not yet complete, in which case WorkItem.getResult returns null.  The Work started but completed with an exception (perhaps because a Work Manager instance terminated abruptly), indicated by an exception thrown by WorkItem.getResult.  Or the Work Manager instance indicated that the Work is complete and the Work ran to completion, in which case WorkItem.getResult returns a non-null result and throws no exception.
- Error handling
  - Entry Processors: failure of a node results in all the work assigned to that node being executed on the new primary.
    This may result in some work being executed twice, but Coherence ensures that the cache is only updated once per item.
  - Work Managers: failure of a node results in the loss of scheduled tasks assigned to that node.  Completed tasks are sent back to the client as they complete.

Fault Handling Extension

Entry processors have excellent error handling within Coherence; Work Managers less so.  In order to provide resiliency on node failure I implemented a "RetryWorkManager" class that detects tasks that have failed to complete successfully and resubmits them to the grid for another attempt.

A JDeveloper project with the RetryWorkManager is available for download here.  It includes sample code to run a simple task across multiple work manager threads.

To create a new RetryWorkManager that will retry failed work twice, you would use this:

  WorkManager manager = new RetryWorkManager("WorkManagerName", 2);  // Change for number of retries; if no retry count is provided the default is 0.

You can control the number of retries at the individual work level as shown below:

  WorkItem workItem = schedule(work);                   // Use number of retries set at WorkManager creation
  WorkItem workItem = schedule(work, workListener);     // Use number of retries set at WorkManager creation
  WorkItem workItem = schedule(work, 4);                // Change number of retries
  WorkItem workItem = schedule(work, workListener, 4);  // Change number of retries

Currently the RetryWorkManager defaults to having 0 threads.  To change this, use this form:

  WorkItem workItem = schedule(work, workListener, 3, 4);  // Change number of threads (3) and retries (4)

Note that none of this sample code is supported by Oracle in any way; it is provided purely as a sample of what can be done with Coherence.

How the RetryWorkManager Works

The RetryWorkManager delegates most operations to a Coherence WorkManager instance.  It creates a WorkManagerListener to intercept status updates.
On receiving a WORK_COMPLETED callback the listener checks the result to see if the completion is due to an error.  If an error occurred and there are retries left, the work is resubmitted.  The WorkItem returned by scheduling an event is wrapped in a RetryWorkItem; this RetryWorkItem is updated with a new Coherence WorkItem when the task is retried.  If the client registers a WorkManagerListener then the RetryWorkManagerListener delegates non-retriable events to the client listener.  Finally, the waitForAll and waitForAny methods are modified to deal with work items being resubmitted in the event of failure.

Sample Code for EntryProcessor and RetryWorkManager

The downloadable project contains sample code for running the work manager and an entry processor.  The demo implements a 3-tier architecture:

- Coherence Cache Servers
  - Can be started by running RunCacheServer.cmd.
  - Run a distributed cache used by the task to be executed in the grid.
- Coherence Work Manager Servers
  - Can be started by running RunWorkManagerServer.cmd; takes no parameters.
  - Run two threads for executing tasks.
- Coherence Work Manager Clients
  - Can be started by running RunWorkManagerClient.cmd.
  - Takes three parameters currently:
    - Work Manager name: should be "AntonyWork", which is also the default.
    - Number of tasks to schedule: default is 10.
    - Time to wait for tasks to complete, in seconds: default is 60.

The task stores the number of times it has been executed in the cache, so multiple runs will see the counter incrementing.  The choice between EntryProcessor and WorkManager is controlled by changing the value of USE_ENTRY_PROCESSOR between false and true in the RunWorkManagerClient.cmd script.

The SetWorkManagerEnv.cmd script should be edited to point to the Coherence home directory and the Java home directory.

Summary

If you need to perform operations on cache entries and don't need to have cross-checks between the entries, then the best solution is to use an entry processor.
The entry processor is fault tolerant, and updates to the cached entity will be performed once only.

If you need to perform generic work that may need to touch multiple related cache entries, then the work manager may be a better solution.  The extensions I created in the RetryWorkManager provide a degree of resiliency to deal with node failure without impacting the client.

The RetryWorkManager can be downloaded here.
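The core of the retry behaviour described above can be sketched in a few lines of plain Java.  This is not the downloadable sample's code, just an illustration of the pattern: work that completes with an exception is resubmitted until it succeeds or the retry budget is exhausted.

```java
import java.util.function.Supplier;

// Sketch of the RetryWorkManager idea: resubmit failed work up to a
// fixed number of retries, then surface the last failure. Illustrative
// only; the real class wraps a Coherence WorkManager and its listeners.
public class RetrySketch {
    static <T> T schedule(Supplier<T> work, int retries) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= retries; attempt++) {
            try {
                return work.get();   // completed without error
            } catch (RuntimeException e) {
                last = e;            // completed with an exception; maybe retry
            }
        }
        throw last;                  // retries exhausted, surface the failure
    }

    public static void main(String[] args) {
        int[] failures = {2};        // simulate a task that fails twice, then succeeds
        String result = schedule(() -> {
            if (failures[0]-- > 0) throw new IllegalStateException("node lost");
            return "done";
        }, 2);
        System.out.println(result);  // done
    }
}
```

The real RetryWorkManager does the equivalent asynchronously, resubmitting from the WORK_COMPLETED callback rather than in a blocking loop.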



The Impact of Change

Measuring Impact of Change in SOA Suite

Mormon prophet Thomas S. Monson once said:

  When performance is measured, performance improves.  When performance is measured and reported, the rate of performance accelerates.  (LDS Conference Report, October 1970, p107)

Like everything in life, a SOA Suite installation that is monitored and tracked has a much better chance of performing well than one that is not measured.  With that in mind I came up with a tool to allow measurement of the impact of configuration changes on database usage in SOA Suite.  This tool can be used to assess the impact of different configurations on both database growth and database performance, helping to decide which optimizations offer real benefit to the composite under test.

Basic Approach

The basic approach of the tool is to take a snapshot of the number of rows in the SOA tables before executing a composite.  The composite is then executed.  After the composite has completed, another snapshot is taken of the SOA tables.  This is illustrated in the diagram below.

An example of the data collected by the tool is shown below:

  Test Name    Total Tables Changed  Total Rows Added  Notes
  AsyncTest1   13                    15                Async interaction with simple SOA composite, one retry to send response.
  AsyncTest2   12                    13                Async interaction with simple SOA composite, no retries on sending response.
  AsyncTest3   12                    13                Async interaction with simple SOA composite, no callback address provided.
  OneWayTest1  12                    13                One-way interaction with simple SOA composite.
  SyncTest1    7                     7                 Sync interaction with simple SOA composite.

Note that the first three columns are provided by the tool; the fourth column is just an aide-memoire to identify what the test actually did.
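The before/after comparison behind those numbers is a simple per-table subtraction.  The sketch below shows the idea in plain Java (the real tool does this in the database with a stored procedure and views; table names and counts here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the snapshot-delta calculation: subtract the "before"
// per-table row counts from the "after" counts, keeping only tables
// that changed. Illustrative only.
public class SnapshotDelta {
    static Map<String, Integer> delta(Map<String, Integer> before, Map<String, Integer> after) {
        Map<String, Integer> d = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : after.entrySet()) {
            int diff = e.getValue() - before.getOrDefault(e.getKey(), 0);
            if (diff != 0) d.put(e.getKey(), diff);  // only tables that changed
        }
        return d;
    }

    public static void main(String[] args) {
        Map<String, Integer> before = Map.of("AUDIT_TRAIL", 10, "XML_DOCUMENT", 5, "WORK_ITEM", 3);
        Map<String, Integer> after  = Map.of("AUDIT_TRAIL", 12, "XML_DOCUMENT", 7, "WORK_ITEM", 3);

        Map<String, Integer> d = delta(before, after);
        int totalRows = d.values().stream().mapToInt(Integer::intValue).sum();
        // "Total Tables Changed" and "Total Rows Added", as in the report above
        System.out.println(d.size() + " tables changed, " + totalRows + " rows added");
    }
}
```

Running a test twice with different composite configurations and comparing the two deltas gives the per-configuration differences shown in the tables that follow.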
The tool also allows us to drill into the data to get a better look at what is actually changing, as shown in the table below:

  Test Name   Table Name            Rows Added
  AsyncTest1  AUDIT_COUNTER         1
  AsyncTest1  AUDIT_DETAILS         1
  AsyncTest1  AUDIT_TRAIL           2
  AsyncTest1  COMPOSITE_INSTANCE    1
  AsyncTest1  CUBE_INSTANCE         1
  AsyncTest1  CUBE_SCOPE            1
  AsyncTest1  DLV_MESSAGE           1
  AsyncTest1  DOCUMENT_CI_REF       1
  AsyncTest1  DOCUMENT_DLV_MSG_REF  1
  AsyncTest1  HEADERS_PROPERTIES    1
  AsyncTest1  INSTANCE_PAYLOAD      1
  AsyncTest1  WORK_ITEM             1
  AsyncTest1  XML_DOCUMENT          2

Here we have drilled into the test case with the retry of the callback to see which tables are actually being written to.

Finally we can compare two tests to see the difference in the number of rows written and the tables updated, as shown below:

  Test Name   Base Test Name  Table Name   Row Difference
  AsyncTest1  AsyncTest2      AUDIT_TRAIL  1

Here are the additional tables referenced by this test:

  Test Name   Base Test Name  Additional Table Name  Rows Added
  AsyncTest1  AsyncTest2      WORK_ROWS              1

How it Works

I created a database stored procedure, soa_snapshot.take_soa_snapshot(test_name, phase), that queries all the SOA tables and records the number of rows in each table.  By running the stored procedure before and after the execution of a composite we can capture the number of rows in the SOA database before and after the composite executes.  I then created a view that shows the difference in the number of rows before and after composite execution.  This view has a number of sub-views that allow us to query specific items.  The schema is shown below.

The different tables and views are:

- CHANGE_TABLE: used to track the number of rows in the SOA schema; each test case has two or more phases.  Usually phase 1 is before execution and phase 2 is after execution.  This is only used by the stored procedure and the views.
- DELTA_VIEW: used to track changes in the number of rows in the SOA database between phases of a test case.  This is a view on CHANGE_TABLE.
All other views are based off this view.
SIMPLE_DELTA_VIEW Provides the number of rows changed in each table.
SUMMARY_DELTA_VIEW Provides a summary of total rows and tables changed.
DIFFERENT_ROWS_VIEW Provides a summary of the differences in rows updated between test cases.
EXTRA_TABLES_VIEW Provides a summary of the extra tables and rows used by a test case.  This view makes use of a session context, soa_ctx, which holds the test case name and the baseline test case name.  This context is initialized by calling the stored procedure soa_ctx_pkg.set(testCase, baseTestCase).

I created a web service wrapper to the take_soa_snapshot procedure so that I could use SoapUI to perform the tests.

Sample Output

How many rows and tables did a particular test use?

Here we can see how many rows in how many tables changed as a result of running each test:

-- Display the total number of rows and tables changed for each test
select * from summary_delta_view
order by test_name;

TEST_NAME            TOTALDELTAROWS TOTALDELTASIZE TOTALTABLES
-------------------- -------------- -------------- -----------
AsyncTest1                       15              0          13
AsyncTest1noCCIS                 15              0          13
AsyncTest1off                     8              0           8
AsyncTest1prod                   13              0          12
AsyncTest2                       13              0          12
AsyncTest2noCCIS                 13              0          12
AsyncTest2off                     7              0           7
AsyncTest2prod                   11              0          11
AsyncTest3                       13              0          12
AsyncTest3noCCIS                 13          65536          12
AsyncTest3off                     7              0           7
AsyncTest3prod                   11              0          11
OneWayTest1                      13              0          12
OneWayTest1noCCI                 13          65536          12
OneWayTest1off                    7              0           7
OneWayTest1prod                  11              0          11
SyncTest1                         7              0           7
SyncTest1noCCIS                   7              0           7
SyncTest1off                      2              0           2
SyncTest1prod                     5              0           5

20 rows selected

Which tables grew during a test?

Here, for a given test, we can see which tables had rows inserted:

-- Display the tables which grew and show the number of rows they grew by
select * from simple_delta_view
where test_name='AsyncTest1'
order by table_name;

TEST_NAME            TABLE_NAME                      DELTAROWS  DELTASIZE
-------------------- ------------------------------ ---------- ----------
AsyncTest1           AUDIT_COUNTER                           1          0
AsyncTest1           AUDIT_DETAILS                           1          0
AsyncTest1           AUDIT_TRAIL                             2          0
AsyncTest1           COMPOSITE_INSTANCE                      1          0
AsyncTest1           CUBE_INSTANCE                           1          0
AsyncTest1           CUBE_SCOPE                              1          0
AsyncTest1           DLV_MESSAGE                             1          0
AsyncTest1           DOCUMENT_CI_REF                         1          0
AsyncTest1           DOCUMENT_DLV_MSG_REF                    1          0
AsyncTest1           HEADERS_PROPERTIES                      1          0
AsyncTest1           INSTANCE_PAYLOAD                        1          0
AsyncTest1           WORK_ITEM                               1          0
AsyncTest1           XML_DOCUMENT                            2          0

13 rows selected

Which tables grew more in test1 than in test2?

Here we can see the differences in rows for two tests:

-- Return difference in rows updated (test1)
select * from different_rows_view
where test1='AsyncTest1' and test2='AsyncTest2';

TEST1                TEST2                TABLE_NAME                          DELTA
-------------------- -------------------- ------------------------------ ----------
AsyncTest1           AsyncTest2           AUDIT_TRAIL                             1

Which tables were used by test1 but not by test2?

Here we can see tables that were used by one test but not by the other:

-- Register base test case for use in extra_tables_view
-- First parameter (test1) is the test we expect to have extra rows/tables
begin soa_ctx_pkg.set('AsyncTest1', 'AsyncTest2'); end;
/

anonymous block completed

-- Return additional tables used by test1
column TEST2 FORMAT A20
select * from extra_tables_view;

TEST1                TEST2                TABLE_NAME                      DELTAROWS
-------------------- -------------------- ------------------------------ ----------
AsyncTest1           AsyncTest2           WORK_ITEM                               1

Results

I used the tool to find out the following.  All tests were run using SOA Suite 11.1.1.7.  The results are based on a very simple composite, as shown below:

Each BPEL process is basically the same as the one shown below:

Impact of Fault Policy Retry Being Executed Once

Setting     Total Rows Written  Total Tables Updated
No Retry          13                   12
One Retry         15                   13

When a fault policy causes a retry then the following additional database rows are written:

Table Name    Number of Rows
AUDIT_TRAIL         1
WORK_ITEM           1

Impact of Setting Audit Level = Development Instead of Production

Setting       Total Rows Written  Total Tables Updated
Development         13                   12
Production          11                   11

When the audit level is set at development instead of production then the following additional database rows are written:

Table Name    Number of Rows
AUDIT_TRAIL         1
WORK_ITEM           1

Impact of Setting Audit Level = Production Instead of Off

Setting       Total Rows Written  Total Tables Updated
Production          11                   11
Off                  7                    7

When the audit level is set at production rather than off then the following additional database rows are written:

Table Name            Number of Rows
AUDIT_COUNTER               1
AUDIT_DETAILS               1
AUDIT_TRAIL                 1
COMPOSITE_INSTANCE          1

Impact of Setting Capture Composite Instance State

Setting   Total Rows Written  Total Tables Updated
On              13                   12
Off             13                   12

When capture composite instance state is on rather than off then no additional database rows are written; note, however, that other activities occur when composite instance state is captured.

Impact of Setting oneWayDeliveryPolicy = async.cache or sync

Setting         Total Rows Written  Total Tables Updated
async.persist         13                   12
async.cache            7                    7
sync                   7                    7

When choosing async.persist (the default) instead of sync or async.cache then the following additional database rows are written:

Table Name              Number of Rows
AUDIT_DETAILS                 1
DLV_MESSAGE                   1
DOCUMENT_CI_REF               1
DOCUMENT_DLV_MSG_REF          1
HEADERS_PROPERTIES            1
XML_DOCUMENT                  1

As you would expect, the sync mode behaves just like a regular synchronous (request/reply) interaction and creates the same number of rows in the database.  The async.cache mode also creates the same number of rows as a sync interaction because it stores state in memory and provides no restart guarantee.

Caveats & Warnings

The results above are based on a trivial test case.  The numbers will be different for bigger and more complex composites.  However, by taking snapshots of different configurations you can produce the numbers that apply to your composites.

The capture procedure supports multiple steps in a test case, but the views only support two snapshots per test case.

Code Download

The sample project I used is available here.

The scripts used to create the user (createUser.sql), create the schema (createSchema.sql) and sample queries (TableCardinality.sql) are available here.

The Web Service wrapper to the capture state stored procedure is available here.

The sample SoapUI project that I used to take a snapshot, perform the test and take a second snapshot is available here.
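The comparison views described above (DIFFERENT_ROWS_VIEW and EXTRA_TABLES_VIEW) essentially diff two per-table delta sets.  The following hedged Python sketch shows the same logic; the input figures are illustrative, not real view output:

```python
# Sketch of the two-test comparison: given per-table row deltas for two
# test cases, report where test1 wrote more rows than test2, and which
# tables only test1 touched.
def compare_tests(test1, test2):
    diffs = {t: test1[t] - test2[t]
             for t in test1 if t in test2 and test1[t] != test2[t]}
    extra = {t: n for t, n in test1.items() if t not in test2}
    return diffs, extra

# Illustrative deltas loosely modeled on AsyncTest1 vs AsyncTest2.
async_test1 = {"AUDIT_TRAIL": 2, "CUBE_INSTANCE": 1, "WORK_ITEM": 1}
async_test2 = {"AUDIT_TRAIL": 1, "CUBE_INSTANCE": 1}

diffs, extra = compare_tests(async_test1, async_test2)
print(diffs)  # tables where test1 grew more than test2
print(extra)  # tables only test1 used
```

In the real tool this comparison runs in the database, with the two test names supplied through the soa_ctx session context.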




Setting up an Oracle Event Processing Cluster

Recently I was working with Oracle Event Processing (OEP) and needed to set it up as part of a high availability cluster.  OEP uses Coherence for quorum membership in an OEP cluster.  Because the solution used caching, it was also necessary to include access to external Coherence nodes.  Input messages need to be duplicated across multiple OEP streams, so a JMS Topic adapter needed to be configured.  Finally, only one copy of each output event was desired, requiring the use of an HA adapter.  In this blog post I will go through the steps required to implement a true HA OEP cluster.

OEP High Availability Review

The diagram below shows a very simple non-HA OEP configuration:

Events are received from a source (JMS in this blog).  The events are processed by an event processing network which makes use of a cache (Coherence in this blog).  Finally any output events are emitted.  The output events could go to any destination, but in this blog we will emit them to a JMS queue.

OEP provides high availability by having multiple event processing instances processing the same event stream in an OEP cluster.  One instance acts as the primary and the other instances act as secondary processors.  Usually only the primary will output events, as shown in the diagram below (the top stream is the primary):

The actual event processing is the same as in the previous non-HA example.  What is different is how input and output events are handled.  Because we want to minimize or avoid duplicate events, we have added an HA output adapter to the event processing network.  This adapter acts as a filter, so that only the primary stream will emit events to the output queue.
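The filtering role of the HA output adapter can be pictured with a toy Python sketch (this is an illustration of the concept, not the OEP API; the class and field names are hypothetical): every server computes the alert, but only the server currently acting as primary lets it through.

```python
# Toy illustration of primary-only emission in an HA cluster:
# all servers produce the same output event, but only the server
# currently marked as primary actually emits it downstream.
class HAOutputFilter:
    def __init__(self, server_name, cluster_state):
        self.server_name = server_name
        self.cluster_state = cluster_state  # shared view of who is primary

    def emit(self, event, queue):
        if self.cluster_state["primary"] == self.server_name:
            queue.append(event)  # primary emits the event
        # secondaries suppress (or buffer) their copy of the event

state = {"primary": "server1"}
queue = []
for name in ("server1", "server2", "server3"):
    HAOutputFilter(name, state).emit("alert-1", queue)
# Only one copy of the alert reaches the queue despite three servers.
```

If the primary fails, promoting a secondary in the shared state is all that is needed for output to continue from the new primary.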
If the processing of events within the network depends on the time at which events are received, then it is necessary to synchronize the event arrival time across the cluster by using an HA input adapter to synchronize the arrival timestamps of events across the cluster.

OEP Cluster Creation

Let's begin by setting up the base OEP cluster.  To do this we create new OEP configurations on each machine in the cluster.  The steps are outlined below.  Note that the same steps are performed on each machine for each server which will run on that machine:

Run ${MW_HOME}/ocep_11.1/common/bin/config.sh.  MW_HOME is the installation directory; note that multiple Fusion Middleware products may be installed in this directory.
When prompted, choose "Create a new OEP domain".
Provide administrator credentials.  Make sure you provide the same credentials on all machines in the cluster.
Specify a "Server name" and "Server listen port".  Each OEP server must have a unique name.  Different servers can share the same "Server listen port" unless they are running on the same host.
Provide keystore credentials.  Make sure you provide the same credentials on all machines in the cluster.
Configure any required JDBC data source.
Provide the "Domain Name" and "Domain location".  All servers must have the same "Domain name".  The "Domain location" may be different on each server, but I would keep it the same to simplify administration.  Multiple servers on the same machine can share the "Domain location" because their configuration will be placed in the directory corresponding to their server name.
Create domain!

Configuring an OEP Cluster

Now that we have created our servers we need to configure them so that they can find each other.  OEP uses Oracle Coherence to determine cluster membership.  Coherence clusters can use either multicast or unicast to discover already running members of a cluster.
Multicast has the advantage that it is easy to set up and scales better (see http://www.ateam-oracle.com/using-wka-in-large-coherence-clusters-disabling-multicast/) but has a number of challenges, including failure to propagate by default through routers and accidentally joining the wrong cluster because someone else chose the same multicast settings.  We will show how to use both unicast and multicast to discover the cluster.

Multicast Discovery: Coherence multicast uses a class D multicast address that is shared by all servers in the cluster.  On startup a Coherence node broadcasts a message to the multicast address looking for an existing cluster.  If no one responds then the node will start the cluster.

Unicast Discovery: Coherence unicast uses Well Known Addresses (WKAs).  Each server in the cluster needs a dedicated listen address/port combination.  A subset of these addresses are configured as WKAs and shared between all members of the cluster.  As long as at least one of the WKAs is up and running then servers can join the cluster.  If a server does not find any cluster members then it checks to see if its listen address and port are in the WKA list.  If it is, then that server will start the cluster; otherwise it will wait for a WKA server to become available.

To configure a cluster the same steps need to be followed for each server in the cluster:

Set an event server address in the config.xml file.  Add the following to the <cluster> element:

<cluster>
    <server-name>server1</server-name>
    <server-host-name>oep1.oracle.com</server-host-name>
</cluster>

The "server-name" is displayed in the visualizer and should be unique to the server.  The "server-host-name" is used by the visualizer to access remote servers.  The "server-host-name" must be an IP address or it must resolve to an IP address that is accessible from all other servers in the cluster.  The listening port is configured in the <netio> section of the config.xml.  The server-host-name/listening port combination should be unique to each server.

For multicast discovery, set a common cluster multicast listen address shared by all servers in the config.xml file.  Add the following to the <cluster> element:

<cluster>
    …
    <!-- For use in Coherence multicast only! -->
    <multicast-address>239.255.200.200</multicast-address>
    <multicast-port>9200</multicast-port>
</cluster>

The "multicast-address" must be able to be routed through any routers between servers in the cluster.  Optionally you can specify the bind address of the server; this allows you to control port usage and determine which network is used by Coherence.  To do so, create a "tangosol-coherence-override.xml" file in the ${DOMAIN}/{SERVERNAME}/config directory for each server in the cluster:

<?xml version='1.0'?>
<coherence>
    <cluster-config>
        <unicast-listener>
            <!-- This server's Coherence address and port number -->
            <address>192.168.56.91</address>
            <port>9200</port>
        </unicast-listener>
    </cluster-config>
</coherence>

For unicast discovery, configure the Coherence WKA cluster discovery instead.  Create a "tangosol-coherence-override.xml" file in the ${DOMAIN}/{SERVERNAME}/config directory for each server in the cluster:

<?xml version='1.0'?>
<coherence>
    <cluster-config>
        <unicast-listener>
            <!-- WKA Configuration -->
            <well-known-addresses>
                <socket-address id="1">
                    <address>192.168.56.91</address>
                    <port>9200</port>
                </socket-address>
                <socket-address id="2">
                    <address>192.168.56.92</address>
                    <port>9200</port>
                </socket-address>
            </well-known-addresses>
            <!-- This server's Coherence address and port number -->
            <address>192.168.56.91</address>
            <port>9200</port>
        </unicast-listener>
    </cluster-config>
</coherence>

List at least two servers in the <socket-address> elements.  For each <socket-address> element there should be a server that has corresponding <address> and <port> elements directly under <unicast-listener>.  One of the servers listed in the <well-known-addresses> element must be the first server started.  Not all servers need to be listed in <well-known-addresses>, but see the previous point.

Enable clustering using a Coherence cluster.  Add the following to the <cluster> element in config.xml:

<cluster>
    …
    <enabled>true</enabled>
</cluster>

The "enabled" element tells OEP that it will be using Coherence to establish cluster membership; this can also be achieved by setting the value to "coherence".

The following shows the <cluster> config for another server in the cluster when using multicast discovery, with differences highlighted:

<cluster>
    <server-name>server2</server-name>
    <server-host-name>oep2.oracle.com</server-host-name>
    <!-- For use in Coherence multicast only! -->
    <multicast-address>239.255.200.200</multicast-address>
    <multicast-port>9200</multicast-port>
    <enabled>true</enabled>
</cluster>

The following shows the <cluster> config for another server in the cluster when using unicast discovery, with differences highlighted:

<cluster>
    <server-name>server2</server-name>
    <server-host-name>oep2.oracle.com</server-host-name>
    <enabled>true</enabled>
</cluster>

The following shows the "tangosol-coherence-override.xml" file for another server in the cluster, with differences highlighted:

<?xml version='1.0'?>
<coherence>
    <cluster-config>
        <unicast-listener>
            <!-- WKA Configuration -->
            <well-known-addresses>
                <socket-address id="1">
                    <address>192.168.56.91</address>
                    <port>9200</port>
                </socket-address>
                <socket-address id="2">
                    <address>192.168.56.92</address>
                    <port>9200</port>
                </socket-address>
            </well-known-addresses>
            <!-- This server's Coherence address and port number -->
            <address>192.168.56.92</address>
            <port>9200</port>
        </unicast-listener>
    </cluster-config>
</coherence>

You should now have a working OEP cluster.  Check the cluster by starting all the servers.

Look for a message like the following on the first server to start, indicating that another server has joined the cluster:

<Coherence> <BEA-2049108> <The domain membership has changed to [server2, server1], the new domain primary is "server1">

Log on to the Event Processing Visualizer of one of the servers – http://<hostname>:<port>/wlevs.  Select the cluster name on the left and then select group "AllDomainMembers".  You should see a list of all the running servers in the "Servers of Group – AllDomainMembers" section.

Sample Application

Now that we have a working OEP cluster, let us look at a simple application that can be used as an example of how to cluster enable an application.
This application models service request tracking for hardware products.  The application we will use performs the following checks:

If a new service request (identified by SRID) arrives (indicated by status=RAISE) then we expect some sort of follow up in the next 10 seconds (seconds because I want to test this quickly).  If no follow up is seen then an alert should be raised.  For example, if I receive an event (SRID=1, status=RAISE) and after 10 seconds I have not received a follow up message (SRID=1, status<>RAISE) then I need to raise an alert.

If a service request (identified by SRID) arrives and there has been another service request (identified by a different SRID) for the same physical hardware (identified by TAG) then an alert should be raised.  For example, if I receive an event (SRID=2, TAG=M1) and later I receive another event for the same hardware (SRID=3, TAG=M1) then an alert should be raised.

Note use case 1 is nicely time bounded – in this case the time window is 10 seconds.  Hence this is an ideal candidate to be implemented entirely in CQL.

Use case 2 has no time constraints, hence over time there could be a very large number of CQL queries running looking for a matching TAG but a different SRID.  In this case it is better to put the TAGs into a cache and search the cache for duplicate tags.  This reduces the amount of state information held in the OEP engine.

The sample application to implement this is shown below:

Messages are received from a JMS Topic (InboundTopicAdapter).  Test messages can be injected via a CSV adapter (RequestEventCSVAdapter).  Alerts are sent to a JMS Queue (OutboundQueueAdapter), and also printed to the server standard output (PrintBean).  Use case 1 is implemented by the MissingEventProcessor.
Use case 2 is implemented by inserting the TAG into a cache (InsertServiceTagCacheBean) using a Coherence event processor and then querying the cache for each new service request (DuplicateTagProcessor); if the same tag is already associated with an SR in the cache then an alert is raised.  The RaiseEventFilter is used to filter out existing service requests from the use case 2 stream.

The non-HA version of the application is available to download here.  We will use this application to demonstrate how to HA enable an application for deployment on our cluster.

A CSV file (TestData.csv) and a load generator properties file (HADemoTest.prop) are provided to test the application by injecting events using the CSV Adapter.

Note that the application reads a configuration file (System.properties) which should be placed in the domain directory of each event server.

Deploying an Application

Before deploying an application to a cluster it is a good idea to create a group in the cluster.  Multiple servers can be members of this group.  To add a group to an event server just add an entry to the <cluster> element in config.xml as shown below:

<cluster>
    …
    <groups>HAGroup</groups>
</cluster>

Multiple servers can be members of a group and a server can be a member of multiple groups.  This allows you to have different levels of high availability in the same event processing cluster.

Deploy the application using the Visualizer.  Target the application at the group you created, or at the AllDomainMembers group.

Test the application, typically using a CSV Adapter.  Note that using a CSV adapter sends all the events to a single event server.  To fix this we need to add a JMS output adapter (OutboundTopicAdapter) to our application and then send events from the CSV adapter to the outbound JMS adapter as shown below:

So now we are able to send events via CSV to an event processor that in turn sends the events to a JMS topic.
But we still have a few challenges.

Managing Input

The first challenge is managing input.  Because OEP relies on the same event stream being processed by multiple servers, we need to make sure that all our servers get the same messages from the JMS Topic.  To do this we configure the JMS connection factory to have an Unrestricted Client ID.  This allows multiple clients (OEP servers in our case) to use the same connection factory.  Client IDs are mandatory when using durable topic subscriptions.  We also need each event server to have its own subscriber ID for the JMS Topic; this ensures that each server will get a copy of all the messages posted to the topic.  If we use the same subscriber ID for all the servers then the messages will be distributed across the servers, with each server seeing a completely disjoint set of messages to the other servers in the cluster.  This is not what we want, because each server should see the same event stream.  We can use the server name as the subscriber ID, as shown in the excerpt below from our application:

<wlevs:adapter id="InboundTopicAdapter" provider="jms-inbound">
    …
    <wlevs:instance-property name="durableSubscriptionName"
            value="${com_bea_wlevs_configuration_server_ClusterType.serverName}" />
</wlevs:adapter>

This works because I have placed a ConfigurationPropertyPlaceholderConfigurer bean in my application as shown below; this same bean is also used to access properties from a configuration file:

<bean id="ConfigBean"
        class="com.bea.wlevs.spring.support.ConfigurationPropertyPlaceholderConfigurer">
        <property name="location" value="file:../Server.properties"/>
</bean>

With this configuration each server will now get a copy of all the events.

As our application relies on elapsed time, we should make sure that the timestamps of the received messages are the same on all servers.
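Conceptually, what we need is for one server's timestamp to win and be adopted by every other server for the same event.  The following toy Python sketch illustrates the idea; the class and field names are hypothetical stand-ins, not the real OEP adapter API:

```python
# Toy sketch of cluster-wide arrival-time synchronization: the primary
# stamps each event (identified by a key such as EVID) with its local
# clock, and the secondaries adopt that stamp instead of their own.
import time

class ArrivalTimeStamper:
    def __init__(self, shared_times):
        # shared_times stands in for the group-wide communication channel.
        self.shared_times = shared_times

    def stamp(self, event, primary=False):
        key = event["EVID"]
        if primary:
            self.shared_times[key] = time.time()
        # Every server, primary or secondary, uses the primary's timestamp.
        event["arrivalTime"] = self.shared_times[key]
        return event

shared = {}
primary = ArrivalTimeStamper(shared)
secondary = ArrivalTimeStamper(shared)

e1 = primary.stamp({"EVID": 1}, primary=True)
e2 = secondary.stamp({"EVID": 1})
# Both copies of event 1 now carry an identical arrival time, so any
# time-window calculation produces the same result on both servers.
```

Without this, each server's local clock would decide when a 10-second window expires, and the servers could disagree about whether an alert is due.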
We do this by adding an HA Input adapter to our application:

<wlevs:adapter id="HAInputAdapter" provider="ha-inbound">
    <wlevs:listener ref="RequestChannel" />
    <wlevs:instance-property name="keyProperties"
            value="EVID" />
    <wlevs:instance-property name="timeProperty" value="arrivalTime"/>
</wlevs:adapter>

The HA Adapter sets the given "timeProperty" in the input message to be the current system time.  This time is then communicated to the other HAInputAdapters deployed to the same group.  This allows all servers in the group to have the same timestamp in their event.  The event is identified by the "keyProperties" key field.

To allow the downstream processing to treat the timestamp as an arrival time, the downstream channel is configured with an "application-timestamped" element to set the arrival time of the event.  This is shown below:

<wlevs:channel id="RequestChannel" event-type="ServiceRequestEvent">
    <wlevs:listener ref="MissingEventProcessor" />
    <wlevs:listener ref="RaiseEventFilterProcessor" />
    <wlevs:application-timestamped>
        <wlevs:expression>arrivalTime</wlevs:expression>
    </wlevs:application-timestamped>
</wlevs:channel>

Note that the property set in the HAInputAdapter is used to set the arrival time of the event.

So now all servers in our cluster have the same events arriving from a topic, and each event arrival time is synchronized across the servers in the cluster.

Managing Output

Note that an OEP cluster has multiple servers processing the same input stream.  Obviously if we have the same inputs, synchronized to appear to arrive at the same time, then we will get the same outputs, which is central to OEP's promise of high availability.  So when an alert is raised by our application it will be raised by every server in the cluster.  If we have 3 servers in the cluster then we will get 3 copies of the same alert appearing on our alert queue.  This is probably not what we want.  To fix this we take advantage of an HA Output Adapter.
Unlike input, where there is a single HA Input Adapter, there are multiple HA Output Adapters, each with distinct performance and behavioral characteristics.  The table below is taken from the Oracle® Fusion Middleware Developer's Guide for Oracle Event Processing and shows the different levels of service and performance impact:

Table 24-1 Oracle Event Processing High Availability Quality of Service

High Availability Option                            Missed Events?     Duplicate Events?  Performance Overhead
Section 24.1.2.1, "Simple Failover"                 Yes (many)         Yes (few)          Negligible
Section 24.1.2.2, "Simple Failover with Buffering"  Yes (few) [Foot 1] Yes (many)         Low
Section 24.1.2.3, "Light-Weight Queue Trimming"     No                 Yes (few)          Low-Medium [Foot 2]
Section 24.1.2.4, "Precise Recovery with JMS"       No                 No                 High

I decided to go for the light-weight queue trimming option.  This means I won't lose any events, but I may emit a few duplicate events in the event of primary failure.  This setting causes all output events to be buffered by the secondaries until they are told by the primary that a particular event has been emitted.  To configure this option I add the following adapter to my EPN:

<wlevs:adapter id="HAOutputAdapter" provider="ha-broadcast">
    <wlevs:listener ref="OutboundQueueAdapter" />
    <wlevs:listener ref="PrintBean" />
    <wlevs:instance-property name="keyProperties" value="timestamp"/>
    <wlevs:instance-property name="monotonic" value="true"/>
    <wlevs:instance-property name="totalOrder" value="false"/>
</wlevs:adapter>

This uses the time of the alert (the timestamp property) as the key used to identify events which have been trimmed.  This works in this application because the alert time is the time of the source event, and the times of the source events are synchronized using the HA Input Adapter.  Because this is a time value it will increase, and so I set monotonic="true".
However I may get two alerts raised at the same timestamp, and in that case I set totalOrder="false".

I also added the following additional configuration to config.xml for the application:

<ha:ha-broadcast-adapter>
    <name>HAOutputAdapter</name>
    <warm-up-window-length units="seconds">15</warm-up-window-length>
    <trimming-interval units="millis">1000</trimming-interval>
</ha:ha-broadcast-adapter>

This causes the primary to tell the secondaries which is its latest emitted alert every 1 second.  This in turn causes the secondaries to trim from their buffers all alerts prior to and including the latest emitted alert.  So in the worst case I will get one second of duplicated alerts.  It is also possible to set a number of events rather than a time period.  The trade-off here is that I can reduce synchronization overhead by having longer time intervals or more events, causing more memory to be used by the secondaries, or I can cause more frequent synchronization, using less memory in the secondaries and generating fewer duplicate alerts, but with more communication between the primary and the secondaries to trim the buffer.

The warm-up window is used to stop a secondary joining the cluster before it has been running for that time period.  The window is based on the time that the EPN needs to be running to have the same state as the other servers.  In our example application we have a CQL query that runs over a period of 10 seconds, so I set the warm-up window to 15 seconds to ensure that a newly started server has the same state as all the other servers in the cluster.  The warm-up window should be greater than the longest query window.

Adding an External Coherence Cluster

When we are running OEP as a cluster we have additional overhead in the servers.  The HA Input Adapter is synchronizing event time across the servers, and the HA Output Adapter is synchronizing output events across the servers.  The HA Output Adapter is also buffering output events in the secondaries.
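The buffer-and-trim behaviour described above can be illustrated with a rough Python sketch (class names are hypothetical; this is the concept, not the OEP implementation): the secondary buffers alerts keyed by their monotonic timestamp, drops everything up to the key the primary last confirmed, and on failover flushes whatever remains, which may re-emit a few alerts but never loses one.

```python
# Rough sketch of light-weight queue trimming on a secondary server.
class TrimmedBuffer:
    def __init__(self):
        self.buffer = []  # list of (key, alert); keys are non-decreasing

    def buffer_event(self, key, alert):
        # Secondaries buffer every output event instead of emitting it.
        self.buffer.append((key, alert))

    def trim(self, last_emitted_key):
        # The primary periodically broadcasts the key of the last alert it
        # emitted; drop everything up to and including that key.
        self.buffer = [(k, a) for k, a in self.buffer if k > last_emitted_key]

    def failover_flush(self):
        # On becoming primary, emit everything not yet confirmed.
        # Some of these may already have been emitted: duplicates are
        # possible, but no alert is ever lost.
        emitted = [a for _, a in self.buffer]
        self.buffer = []
        return emitted

secondary = TrimmedBuffer()
secondary.buffer_event(100, "alert-a")
secondary.buffer_event(101, "alert-b")
secondary.buffer_event(102, "alert-c")
secondary.trim(101)                 # primary confirmed up to timestamp 101
replayed = secondary.failover_flush()
```

A shorter trimming interval keeps this buffer small and the duplicate window short, at the cost of more primary-to-secondary chatter, which is exactly the trade-off the trimming-interval setting controls.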
We can't do anything about this, but we can move the Coherence cache we are using outside of the OEP servers, reducing the memory pressure on those servers and also moving some of the processing outside of the server.  Making our Coherence caches external to our OEP cluster is a good idea for the following reasons:

It moves storage of cache entries outside of the OEP server JVMs, freeing more memory for storing CQL state.
It allows storage of more entries in the cache by scaling the cache independently of the OEP cluster.
It moves cache processing outside the OEP servers.

To create the external Coherence cache do the following:

Create a new directory for our standalone Coherence servers, perhaps at the same level as the OEP domain directory.
Copy the tangosol-coherence-override.xml file previously created for the OEP cluster into a config directory under the Coherence directory created in the previous step.
Copy the coherence-cache-config.xml file from the application into a config directory under the Coherence directory created in the previous step.
Add the following to the tangosol-coherence-override.xml file in the Coherence config directory:

<coherence>
    <cluster-config>
        <member-identity>
            <cluster-name>oep_cluster</cluster-name>
            <member-name>Grid1</member-name>
        </member-identity>
        …
    </cluster-config>
</coherence>

Important note: the <cluster-name> must match the name of the OEP cluster as defined in the <domain><name> element in the event server's config.xml.
The member name is used to help identify the server.

Disable storage for our caches in the event servers by editing the coherence-cache-config.xml file in the application and adding the following element to the caches:

<distributed-scheme>
    <scheme-name>DistributedCacheType</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
        <local-scheme/>
    </backing-map-scheme>
    <local-storage>false</local-storage>
</distributed-scheme>

The local-storage flag stops the OEP server from storing entries for caches using this cache scheme.  Do not disable storage at the global level (-Dtangosol.coherence.distributed.localstorage=false) because this will disable storage on some OEP-specific cache schemes as well as our application cache.  We don't want to put those schemes into our cache servers because they are used by OEP to maintain cluster integrity and have only one entry per application per server, so they are very small.  If we put those into our Coherence cache servers we would have to add OEP-specific libraries to our cache servers and enable them in our coherence-cache-config.xml, all of which is too much trouble for little or no benefit.

If using unicast discovery (this section is not required if using multicast) then we want to make the Coherence grid servers the Well Known Address servers, because we want to disable storage of entries on our OEP servers, and Coherence nodes with storage disabled cannot initialize a cluster.  To enable the Coherence servers to be primaries in the Coherence grid do the following:

Change the unicast-listener addresses in the Coherence servers' tangosol-coherence-override.xml files to be suitable values for the machine they are running on – typically change the listen address.
Modify the WKA addresses in the OEP servers' and the Coherence servers' tangosol-coherence-override.xml files to match at least two of the Coherence servers' listen addresses.
The following table shows how this might be configured for 2 OEP servers and 2 cache servers.

OEP Server 1:

<?xml version='1.0'?>
<coherence>
  <cluster-config>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>192.168.56.91</address>
          <port>9300</port>
        </socket-address>
        <socket-address id="2">
          <address>192.168.56.92</address>
          <port>9300</port>
        </socket-address>
      </well-known-addresses>
      <address>192.168.56.91</address>
      <port>9200</port>
    </unicast-listener>
  </cluster-config>
</coherence>

OEP Server 2:

<?xml version='1.0'?>
<coherence>
  <cluster-config>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>192.168.56.91</address>
          <port>9300</port>
        </socket-address>
        <socket-address id="2">
          <address>192.168.56.92</address>
          <port>9300</port>
        </socket-address>
      </well-known-addresses>
      <address>192.168.56.92</address>
      <port>9200</port>
    </unicast-listener>
  </cluster-config>
</coherence>

Cache Server 1:

<?xml version='1.0'?>
<coherence>
  <cluster-config>
    <member-identity>
      <cluster-name>oep_cluster</cluster-name>
      <member-name>Grid1</member-name>
    </member-identity>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>192.168.56.91</address>
          <port>9300</port>
        </socket-address>
        <socket-address id="2">
          <address>192.168.56.92</address>
          <port>9300</port>
        </socket-address>
      </well-known-addresses>
      <address>192.168.56.91</address>
      <port>9300</port>
    </unicast-listener>
  </cluster-config>
</coherence>

Cache Server 2:

<?xml version='1.0'?>
<coherence>
  <cluster-config>
    <member-identity>
      <cluster-name>oep_cluster</cluster-name>
      <member-name>Grid2</member-name>
    </member-identity>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>192.168.56.91</address>
          <port>9300</port>
        </socket-address>
        <socket-address id="2">
          <address>192.168.56.92</address>
          <port>9300</port>
        </socket-address>
      </well-known-addresses>
      <address>192.168.56.92</address>
      <port>9300</port>
    </unicast-listener>
  </cluster-config>
</coherence>

Note that the OEP servers do not listen on the WKA addresses; they use different port numbers even though they run on the same machines as the cache servers.  Also note that the Coherence servers are the ones that listen on the WKA addresses.

Now that the configuration is complete we can create a start script for the Coherence grid servers as follows:

#!/bin/sh
MW_HOME=/home/oracle/fmw
OEP_HOME=${MW_HOME}/ocep_11.1
JAVA_HOME=${MW_HOME}/jrockit_160_33
CACHE_SERVER_HOME=${MW_HOME}/user_projects/domains/oep_coherence
CACHE_SERVER_CLASSPATH=${CACHE_SERVER_HOME}/HADemoCoherence.jar:${CACHE_SERVER_HOME}/config
COHERENCE_JAR=${OEP_HOME}/modules/com.tangosol.coherence_3.7.1.6.jar
JAVAEXEC=$JAVA_HOME/bin/java
# specify the JVM heap size
MEMORY=512m
if [ "$1" = "-jmx" ]; then
    JMXPROPERTIES="-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true"
    shift
fi
JAVA_OPTS="-Xms$MEMORY -Xmx$MEMORY $JMXPROPERTIES"
$JAVAEXEC -server -showversion $JAVA_OPTS -cp "${CACHE_SERVER_CLASSPATH}:${COHERENCE_JAR}" com.tangosol.net.DefaultCacheServer $1

Note that I put the tangosol-coherence-override.xml and the coherence-cache-config.xml files in a config directory and added that directory to my classpath (CACHE_SERVER_CLASSPATH=${CACHE_SERVER_HOME}/HADemoCoherence.jar:${CACHE_SERVER_HOME}/config) so
that Coherence would find the override file.  Because my application uses in-cache processing (entry processors) I had to add a jar file containing the classes required by the entry processor to the classpath (the HADemoCoherence.jar in CACHE_SERVER_CLASSPATH).  The classpath references the Coherence jar shipped with OEP to avoid version mismatches (COHERENCE_JAR=${OEP_HOME}/modules/com.tangosol.coherence_3.7.1.6.jar).  This script is based on the standard cache-server.sh script that ships with standalone Coherence.  The -jmx flag can be passed to the script to enable the Coherence JMX management beans.

We have now configured OEP to use an external Coherence data grid for its application caches.  When starting, we should always start at least one of the grid servers before starting the OEP servers.  This allows the OEP servers to find the grid.  If we do start things in the wrong order then the OEP servers will block waiting for a storage-enabled node to start (one of the WKA servers if using Unicast).

Summary

We have now created an OEP cluster that makes use of an external Coherence grid for application caches.  The application has been modified to ensure that the timestamps of arriving events are synchronized and that output events are only output by one of the servers in the cluster.  In the event of failure we may get some duplicate events with our configuration (there are configurations that avoid duplicate events) but we will not lose any events.
The final version of the application with full HA capability is shown below:

Files

The following files are available for download:

- Oracle Event Processing: includes Coherence.
- Non-HA version of the application: includes the test file TestData.csv, the Load Test property file HADemoTest.prop, and a Server.properties.Antony file to customize to point to your WLS installation.
- HA version of the application: includes the test file TestData.csv, the Load Test property file HADemoTest.prop, and a Server.properties.Antony file to customize to point to your WLS installation.
- OEP Cluster Files: includes config.xml, tangosol-coherence-override.xml, and a Server.properties that will need customizing for your WLS environment.
- Coherence Cluster Files: includes tangosol-coherence-override.xml and coherence-cache-config.xml, the cache-server.sh start script, and HADemoCoherence.jar with the required classes for the entry processor.

References

The following references may be helpful:

- Oracle Complex Event Processing High Availability White Paper: additional background reading with some good explanations.
- Oracle® Fusion Middleware Administrator's Guide for Oracle Event Processing: Administering Multi-Server Domains With Oracle Coherence, Introduction to Multi-Server Domains, Deploying Applications to Multi-Server Domains.
- Oracle® Fusion Middleware Developer's Guide for Oracle Event Processing: Testing Applications With the Load Generator and csvgen Adapter, Developing Applications for High Availability, Schema Reference: Server Configuration (wlevs_server_config.xsd).
- Oracle® CEP CQL Language Reference.
- Oracle Fusion Middleware Java API Reference for Oracle Event Processing: Class ConfigurationPropertyPlaceholderConfigurer.
- Oracle® Coherence Developer's Guide: Configuring Multicast Communication, Specifying a Cluster Member's Unicast Address, Using Well Known Addresses, Configuring Caches, Operational Configuration Elements (well-known-addresses), Cache Configuration Elements (distributed-scheme).


SOA Suite

Clear Day for Cloud Adapters

salesforce.com Adapter Released

Yesterday Oracle released their cloud adapter for salesforce.com (SFDC) so I thought I would talk a little about why you might want it.  I had previously integrated with SFDC using BPEL and the SFDC web interface, so in this post I will explore why the adapter might be a better approach.

Why?

So if I can interface to SFDC without the adapter, why would I spend money on the adapter?  There are a number of reasons, and in this post I will explain the following three benefits:

- Auto-Login
- Non-Polymorphic Operations
- Operation Wizards

Let's take each one in turn.

Auto-Login

The first obvious benefit is how you connect and make calls to SFDC.  To perform an operation such as querying an account or updating an address, the SFDC interface requires you to do the following:

1. Invoke a login method, which returns a session ID to be placed in the header on all future calls, and the actual endpoint to call.
2. Invoke the actual operation using the provided endpoint and passing the session ID provided.
3. When finished with calls, invoke the logout operation.

Now these are not unreasonable demands.  The problem comes when you try to implement this interface.

Before calling the login method you need the credentials.  These need to be obtained from somewhere; I set them as BPEL preferences but there are other ways to store them.  Calling the login method is not a problem, but you need to be careful in how you make subsequent calls.

First, all subsequent calls must override the endpoint address with the one returned from the login operation.  Secondly, the provided session ID must be placed into a custom SOAP header.  So you have to copy the session ID into a custom SOAP header and provide that header to the invoke operation.  You also have to override the endpointURI property in the invoke with the provided endpoint.  Finally, when you have finished performing operations, you have to log out.

In addition to the number of steps you have to code, there is the problem of knowing when to log out.
The simplest thing to do is, for each operation you wish to perform, execute the login operation followed by the actual working operation and then a logout operation.  The trouble with this is that you are now making three calls every time you want to perform an operation against SFDC, which adds latency to every request.

The adapter hides all this from you, concealing the login/logout operations and allowing connections to be re-used, reducing the number of logins required.  The adapter makes the SFDC call look like a call to any other web service, while behind the scenes it uses a session cache to avoid repeated logins.

Non-Polymorphic Operations

The standard operations in the SFDC interface provide a base object return type, the sObject.  This could be an Account or a Campaign, for example, but the operations always return the base sObject type, leaving it to the client to make sure they process the correct return type.  Similarly, requests also use polymorphic data types.  In BPEL this often requires that the sObject returned is copied to a variable of a more specific type to simplify processing of the data.  If you don't do this then you can still query fields within the specific object, but the SOA tooling cannot check it for you.

The adapter identifies the type of request and response and also helps build the request for you with bind parameters.  This means that you are able to build your process to work with the real data structures, not the abstract ones.  This is another big benefit in my view!

Operation Wizards

The SFDC API is very powerful.  Translation: the SFDC API is very complex.  With great power comes complexity, to paraphrase Uncle Ben (amongst others).  The adapter groups the operations into logical collections and then provides additional help in selecting from within those collections of operations and providing the correct parameters for them.

Installing

Installation takes place in two parts.
The design time is deployed into JDeveloper and the runtime is deployed into the SOA Suite Oracle Home.  The adapter is available for download here and the installation instructions and documentation are here.  Note that you will need OPatch to install the adapter.  OPatch can be downloaded from Oracle Support Patch 6880880.  Don't use the OPatch that ships with JDeveloper and SOA Suite.  If you do, you may see an error like:

Uncaught exception
oracle.classloader.util.AnnotatedNoClassDefFoundError:
      Missing class: oracle.tip.tools.ide.adapters.cloud.wizard.CloudAdapterWizard

You will want OPatch 11.1.0.x.x.  Make sure you download the correct 6880880: it must be for 11.1.x, as that is the version of JDeveloper and SOA Suite, and it must be for the platform you are running on.

Apart from getting the right OPatch, the installation is very straightforward.

So don't be afraid of SFDC integration any more; cloud integration is clear with the SFDC adapter.


SOA Suite

Going Native with JCA Adapters

Formatting JCA Adapter Binary Contents

Sometimes you just need to go native and play with binary data rather than XML.  This occurs commonly when using JCA adapters: the file to be written is in binary format, or the TCP messages written by the Socket Adapter are in binary format.  Although the adapter has no problem converting base64 data into raw binary, it is a little tricky to get the data into base64 format in the first place, so this blog entry will explain how.

Adapter Creation

When creating most adapters (application & DB being the exceptions) you have the option of choosing the message format.  By making the message format "opaque" you are telling the adapter wizard that the message data will be provided as a base64-encoded string, and the adapter will convert this to binary and deliver it.  This results in a WSDL message defined as shown below:

<wsdl:types>
<schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/"
        xmlns="http://www.w3.org/2001/XMLSchema" >
  <element name="opaqueElement" type="base64Binary" />
</schema>
</wsdl:types>
<wsdl:message name="Write_msg">
    <wsdl:part name="opaque" element="opaque:opaqueElement"/>
</wsdl:message>

The Challenge

The challenge now is to convert our data into a base64-encoded string.  For this we have to turn to the Service Bus and MFL.

Within the Service Bus we use the MFL editor to define the format of the binary data.  In our example we will have variable-length strings that start with a 1-byte length field, as well as 32-bit integers and 64-bit floating point numbers.  The example below shows a sample MFL file to describe the above data structure:

<?xml version='1.0' encoding='windows-1252'?>
<!DOCTYPE MessageFormat SYSTEM 'mfl.dtd'>
<!--   Enter description of the message format here.   -->
<MessageFormat name='BinaryMessageFormat' version='2.02'>
    <FieldFormat name='stringField1' type='String' delimOptional='y' codepage='UTF-8'>
        <LenField type='UTinyInt'/>
    </FieldFormat>
    <FieldFormat name='intField' type='LittleEndian4' delimOptional='y'/>
    <FieldFormat name='doubleField' type='LittleEndianDouble' delimOptional='y'/>
    <FieldFormat name='stringField2' type='String' delimOptional='y' codepage='UTF-8'>
        <LenField type='UTinyInt'/>
    </FieldFormat>
</MessageFormat>

Note that we can define the endianness of the multi-byte numbers; in this case they are specified as little endian (Intel format).

I also created an XML version of the MFL that can be used in interfaces.  The XML version can then be imported into a WSDL document to create a web service.

Full Steam Ahead

We now have all the pieces we need to convert XML to binary and deliver it via an adapter, using the process shown below:

1. We receive the XML request; the sample code delivers it as a web service.
2. We convert the request data into MFL-format XML using an XQuery and store the result in a variable (mflVar).
3. We convert the MFL-formatted XML into binary data (internally this is held as a Java byte array) and store the result in a variable (binVar).
4. We convert the byte array to a base64 string using javax.xml.bind.DatatypeConverter.printBase64Binary and store the result in a variable (base64Var).
5. Finally we replace the original $body contents with the output of an XQuery that matches the adapter's expected XML format.

The diagram below shows the OSB pipeline that implements the above.

A Wrinkle

Unfortunately we can only call static Java methods that reside in a jar file imported into Service Bus, so we have to provide a wrapper for the printBase64Binary call.
The Java code below was used to provide this wrapper:

package antony.blog;

import javax.xml.bind.DatatypeConverter;

public class Base64Encoder {
    public static String base64encode(byte[] content) {
        return DatatypeConverter.printBase64Binary(content);
    }
    public static byte[] base64decode(String content) {
        return DatatypeConverter.parseBase64Binary(content);
    }
}

Wrapping Up

Sample code is available here and consists of the following projects:

- BinaryAdapter: JDeveloper SOA project that defines the JCA File Adapter.
- OSBUtils: JDeveloper Java project that defines the Java wrapper for DatatypeConverter.
- BinaryFileWriter: Eclipse OSB project that includes everything needed to try out the steps in this blog.

The OSB project needs to be customized to have the logical directory name point to something sensible.  The project can be tested using the normal OSB console test screen.

The following sample input (note that 16909060 is 0x01020304):

<bin:OutputMessage xmlns:bin="http://www.example.org/BinarySchema">
    <bin:stringField1>First String</bin:stringField1>
    <bin:intField>16909060</bin:intField>
    <bin:doubleField>1.5</bin:doubleField>
    <bin:stringField2>Second String</bin:stringField2>
</bin:OutputMessage>

generates the following binary data file, displayed using "hexdump -C".  The int is highlighted in yellow, the double in orange, and the strings and their associated lengths in green with the length in bold.

$ hexdump -C 2.bin
00000000  0c 46 69 72 73 74 20 53  74 72 69 6e 67 04 03 02  |.First String...|
00000010  01 00 00 00 00 00 00 f8  3f 0d 53 65 63 6f 6e 64  |........?.Second|
00000020  20 53 74 72 69 6e 67                              | String|

Although we used a web service writing through to a file adapter, we could equally well have used the socket adapter to send the data to a TCP endpoint.  Similarly, the source of the data could be anything.
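The highlighted int gives a quick way to see the little-endian layout in action: 16909060 written as hex is 01020304, and the dump shows those bytes reversed. A one-line check with plain printf (nothing here comes from the sample projects):

```shell
# 16909060 as eight hex digits; the hexdump shows the same bytes in reverse
# order (04 03 02 01) because LittleEndian4 writes the low byte first.
printf '%08x\n' 16909060
# → 01020304
```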
The same principle can be applied to decode binary data: just reverse the steps and use the Java method parseBase64Binary instead of printBase64Binary.
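Outside OSB the encode/decode pair can be reproduced with the coreutils base64 tool; the input below is the length-prefixed first string from the hexdump above (octal \014 is 12, the length of "First String"). This is only an illustration of the base64 representation, not part of the sample projects.

```shell
# Encode the length-prefixed string to the base64 form the opaque adapter expects.
printf '\014First String' | base64
# → DEZpcnN0IFN0cmluZw==

# Decode it back to the raw 13 bytes, mirroring parseBase64Binary.
printf 'DEZpcnN0IFN0cmluZw==' | base64 -d | od -An -c
```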


SOA Suite

List Manipulation in Rules

Generating Lists from Rules

Recently I was working with a customer that wanted to use rules to do validation.  The idea was to pass a document to the rules engine and get back a list of violations, or an empty list if there were no violations.  It turns out that there were a couple more steps required than I expected, so I thought I would share my solution in case anyone else is wondering how to return lists from the rules engine.

The Scenario

For the purposes of this blog I modeled a very simple shipping company document that has two main parts.  The Package element contains information about the actual item to be shipped: its weight, type of package and destination details.  The Billing element details the charges applied.

For the purpose of this blog I want to validate the following:

- A residential surcharge is applied to residential addresses.
- A residential surcharge is not applied to non-residential addresses.
- The package is of the correct weight for the type of package.

The Shipment element is sent to the rules engine and the rules engine replies with a ViolationList element that has all the rule violations that were found.

Creating the Return List

We need to create a new ViolationList within rules so that we can return it from within the decision function.  To do this we create a new global variable (I called it ViolationList) and initialize it.  Note that I also had some globals that I used to allow changing the weight limits for different package types.

When the rules session is created it will initialize the global variables and assert the input document, the Shipment element.  However, within rules our ViolationList variable has an uninitialized internal List that is used to hold the actual list of Violation elements.  We need to initialize this to an empty RL.list in the decision function's "Initial Actions" section.

We can then assert the global variable as a fact to make it available as the return value of the decision function.
After this we can now create the rules.

Adding a Violation to the List

If a rule fires because of a violation then we need to add a Violation element to the list.  The easiest way to do this, without having the rule check the ViolationList directly, is to create a function that adds the Violation to the global variable ViolationList.

The function creates a new Violation and initializes it with the appropriate values before appending it to the list within the ViolationList.  When a rule fires it is then just necessary to call the function to add the violation to the list.  In the example above, if the address is a residential address and the surcharge has not been applied, then the function is called with an appropriate error code and message.

How it Works

Each time a rule fires we can add the violation to the list by calling the function.  If multiple rules fire then we will get multiple violations in the list.  We can access the list from a function because it is a global variable.  Because we asserted the global variable as a fact in the decision function's initial actions, it is picked up by the decision function as a return value.  When all possible rules have fired, the decision function will return all asserted ViolationList elements, which in this case will always be one because we only assert it in the initial actions.

What Doesn't Work

The return from a decision function is always a list of the element you specify, so you may be tempted to just assert individual Violation elements and get those back as a list.  That will work if there is at least one element in the list, but the decision function must always return at least one element.  So if there are no violations then an error is thrown.

Alternative

Instead of having a completely separate return element, you could have the ViolationList as part of the input element and then return the input element from the decision function.
This would work, but now you would be copying most of the input variables back into the output variable.  I prefer a cleaner, more function-like interface that makes it easier to handle the response.

Download

Hope this helps someone.  A sample composite project is available for download here.  The composite includes some unit tests.  You can run these from the EM console and then look at the inputs and outputs to see how things work.


Development

Cleaning Up After Yourself

Maintaining a Clean SOA Suite Test Environment

A fun blog entry with Fantasia animated GIFs got me thinking, like Mickey, about how nice it would be to automate clean-up tasks.  I don't have a sorcerer's castle to clean up, but I often have a test environment which I use to run tests; then, after fixing the problems that I uncovered in the tests, I want to run them again.  The problem is that all the data from my previous test run is still there.

In the past I used VirtualBox snapshots to roll back to a clean state, but this has the problem that it not only loses the environment changes I want to get rid of, such as data inserted into tables, it also gets rid of changes I want to keep, such as WebLogic configuration changes and new shell scripts.  So like Mickey I went in search of some magic to help me.

Cleaning Up the SOA Environment

My first task was to clean up the SOA environment by deleting all instance data from the tables.  Now I could use the purge scripts to do this, but that would still leave me with running instances, for example 800 Human Workflow tasks that I don't want to deal with.  So I used the new truncate script to take care of this.  Basically this removes all instance data from your SOA Infrastructure, whether or not the data is live.  This can be run without taking down the SOA Infrastructure (although if you do get strange behavior you may want to restart SOA).  Some statistics, such as service and reference statistics, are kept since server startup, so you may want to restart your server to clear that data.  A sample script to run the truncate SQL is shown below.

#!/bin/sh
# Truncate the SOA schemas, does not truncate BAM.
# Use only in development and test, not production.

# Properties to be set before running script
# SOAInfra Database SID
DB_SID=orcl
# SOA DB Prefix
SOA_PREFIX=DEV
# SOAInfra DB password
SOAINFRA_PASSWORD=welcome1
# SOA Home Directory
SOA_HOME=/u01/app/fmw/Oracle_SOA1

# Set DB Environment
. oraenv << EOF
${DB_SID}
EOF

# Run truncate script from the directory it lives in
cd ${SOA_HOME}/rcu/integration/soainfra/sql/truncate

# Run the truncate script
sqlplus ${SOA_PREFIX}_soainfra/${SOAINFRA_PASSWORD} @truncate_soa_oracle.sql << EOF
exit
EOF

After running this script all your SOA composite instances and associated workflow instances will be gone.

Cleaning Up BAM

The above example shows how easy it is to get rid of all the runtime data in your SOA repository, but if you are using BAM you still have all the contents of your BAM objects from previous runs.  To get rid of that data we need to use BAM ICommand's clear command, as shown in the sample script below:

#!/bin/sh
# Set software locations
FMW_HOME=/home/oracle/fmw
export JAVA_HOME=${FMW_HOME}/jdk1.7.0_17
BAM_CMD=${FMW_HOME}/Oracle_SOA1/bam/bin/icommand
# Set objects to purge (quoted so the list survives as one variable)
BAM_OBJECTS="/path/RevenueEvent /path/RevenueViolation"
# Clean up BAM
for name in ${BAM_OBJECTS}
do
  ${BAM_CMD} -cmd clear -name ${name} -type dataobject
done

After running this script all the rows of the listed objects will be gone.

Ready for Inspection

Unlike the hapless Mickey, our clean-up scripts work reliably and do what we want without unexpected consequences, like flooding the castle.


Fusion Middleware

Postscript on Scripts

More Scripts for SOA Suite

Over time I have evolved my startup scripts and thought it would be a good time to share them.  They are available for download here.  I have finally converted to using WLST, which has a number of advantages.  To me the biggest advantage is that the output and log files are automatically written to a consistent location in the domain directory or node manager directory.  In addition, the WLST scripts wait for the component to start and then return; this lets us string commands together without worrying about the dependencies.

The following are the key scripts:

Script                      Description                        Pre-Reqs               Stops when Task Completes
startWlstNodeManager.sh     Starts Node Manager using WLST     None                   Yes
startNodeManager.sh         Starts Node Manager                None                   Yes
stopNodeManager.sh          Stops Node Manager using WLST      Node Manager running   Yes
startWlstAdminServer.sh     Starts Admin Server using WLST     Node Manager running   Yes
startAdminServer.sh         Starts Admin Server                None                   No
stopAdminServer.sh          Stops Admin Server                 Admin Server running   Yes
startWlstManagedServer.sh   Starts Managed Server using WLST   Node Manager running   Yes
startManagedServer.sh       Starts Managed Server              None                   No
stopManagedServer.sh        Stops Managed Server               Admin Server running   Yes

Samples

To start Node Manager and Admin Server:

startWlstNodeManager.sh ; startWlstAdminServer.sh

To start Node Manager, Admin Server and SOA Server:

startWlstNodeManager.sh ; startWlstAdminServer.sh ; startWlstManagedServer.sh soa_server1

Note that the Admin Server is not started until the Node Manager is running; similarly, the SOA server is not started until the Admin Server is running.

Node Manager Scripts

startWlstNodeManager.sh
Uses WLST to start the Node Manager.  When the script completes the Node Manager will be running.

startNodeManager.sh
The Node Manager is started in the background and the output is piped to the screen.  This causes the Node Manager to continue running in the background if the terminal is closed.
Log files, including a .out capturing standard output and standard error, are placed in the <WL_HOME>/common/nodemanager directory, making them easy to find.  This script pipes the output of the log file to the screen and keeps doing so until terminated; terminating the script does not terminate the Node Manager.

stopNodeManager.sh
Uses WLST to stop the Node Manager.  When the script completes the Node Manager will be stopped.

Admin Server Scripts

startWlstAdminServer.sh
Uses WLST to start the Admin Server.  The Node Manager must be running before executing this command.  When the script completes the Admin Server will be running.

startAdminServer.sh
The Admin Server is started in the background and the output is piped to the screen.  This causes the Admin Server to continue running in the background if the terminal is closed.  Log files, including the .out capturing standard output and standard error, are placed in the same location as if the server had been started by Node Manager, making them easy to find.  This script pipes the output of the log file to the screen and keeps doing so until terminated; terminating the script does not terminate the server.

stopAdminServer.sh
Stops the Admin Server.  When the script completes the Admin Server will no longer be running.

Managed Server Scripts

startWlstManagedServer.sh <MANAGED_SERVER_NAME>
Uses WLST to start the given Managed Server.  The Node Manager must be running before executing this command.  When the script completes the given Managed Server will be running.

startManagedServer.sh <MANAGED_SERVER_NAME>
The given Managed Server is started in the background and the output is piped to the screen.  This causes the given Managed Server to continue running in the background if the terminal is closed.  Log files, including the .out capturing standard output and standard error, are placed in the same location as if the server had been started by Node Manager, making them easy to find.
This script pipes the output of the log file to the screen and keeps doing so until terminated; terminating the script does not terminate the server.

stopManagedServer.sh <MANAGED_SERVER_NAME>
Stops the given Managed Server.  When the script completes the given Managed Server will no longer be running.

Utility Scripts

The following scripts are not called directly but are used by the previous scripts.

_fmwenv.sh
This script provides information about the Node Manager and WebLogic domain and must be edited to reflect the installed FMW environment; in particular the following values must be set:

- DOMAIN_NAME: the WebLogic domain name.
- NM_USERNAME: the Node Manager username.
- NM_PASSWORD: the Node Manager password.
- MW_HOME: the location where WebLogic and other FMW components are installed.
- WEBLOGIC_USERNAME: the WebLogic Administrator username.
- WEBLOGIC_PASSWORD: the WebLogic Administrator password.

The following values may also need changing:

- ADMIN_HOSTNAME: the server where the AdminServer is running.
- ADMIN_PORT: the port number of the AdminServer.
- DOMAIN_HOME: the location of the WebLogic domain directory, defaults to ${MW_HOME}/user_projects/domains/${DOMAIN_NAME}.
- NM_LISTEN_HOST: the Node Manager listening hostname, defaults to the hostname of the machine it is running on.
- NM_LISTEN_PORT: the Node Manager listening port.

_runWLst.sh
This script runs the WLST script passed in environment variable ${SCRIPT} and takes its configuration from _fmwenv.sh.  It dynamically builds a WLST properties file in the /tmp directory to pass parameters into the scripts.  The properties filename is of the form <DOMAIN_NAME>.<PID>.properties.

_runAndLog.sh
This script runs the command passed in as an argument, writing standard out and standard error to a log file.  The log file is rotated between invocations to avoid losing the previous log files.  The log file is then tailed and output to the screen.
This means that this script will never finish by itself.WLST ScriptsThe following WLST scripts are used by the scripts above, taking their properties from /tmp/<DOMAIN_NAME>.<PID>.properties:startNodeManager.py stopNodeManager.py startServer.pystartServerNM.pyRelationshipsThe dependencies and relationships between my scripts and the built in scripts are shown in the diagram below.
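The behaviour of _runAndLog.sh can be sketched roughly as follows. This is an illustration of the rotate/run/capture pattern, not the downloadable script itself; the log directory and single-level rotation are assumptions.

```shell
#!/bin/sh
# Sketch of the _runAndLog.sh pattern described above: rotate the old
# log, run the command in the background with stdout/stderr captured,
# and report the log file. LOG_DIR and the single-level rotation are
# assumptions; the real script also tails the log until interrupted.

LOG_DIR=${LOG_DIR:-/tmp/fmwlogs}

runAndLog() {
  LOG_FILE="${LOG_DIR}/$(basename "$1").out"
  mkdir -p "${LOG_DIR}"
  # Rotate the previous log rather than losing it
  [ -f "${LOG_FILE}" ] && mv "${LOG_FILE}" "${LOG_FILE}.1"
  # nohup + background so the command survives the terminal closing
  nohup "$@" > "${LOG_FILE}" 2>&1 &
  echo "${LOG_FILE}"
}
```

The real script then effectively does a `tail -f` on the reported log file, which is why it never finishes by itself.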

More Scripts for SOA Suite Over time I have evolved my startup scripts and thought it would be a good time to share them.  They are available for download here.  I have finally converted to using...

Fusion Middleware

Thanks for the Memory

Controlling Memory in Oracle SOA Suite

Within WebLogic you can specify the memory to be used by each managed server in the WebLogic console. Unfortunately if you create a domain with Oracle SOA Suite it adds a new config script, setSOADomainEnv.sh, that overwrites any USER_MEM_ARGS passed in to the start scripts. setDomainEnv.sh only sets a single set of memory arguments that are used by all servers, so an admin server gets the same memory parameters as a BAM server. This means some servers will have more memory than they need and others will be too tightly constrained. This is a bad thing.

A Solution

To overcome this I wrote a small script, setMem.sh, that checks which server is being started and then sets the memory accordingly. It supports up to 9 different server types, each identified by a prefix. If the prefix matches then the max and min heap and max and min perm gen are set. Settings are controlled through variables set in the script. If using JRockit then leave the PERM settings empty and they will not be set.

SRV1_PREFIX=Admin
SRV1_MINHEAP=768m
SRV1_MAXHEAP=1280m
SRV1_MINPERM=256m
SRV1_MAXPERM=512m

The above settings match any server whose name starts with Admin and result in the following USER_MEM_ARGS value:

USER_MEM_ARGS=-Xms768m -Xmx1280m -XX:PermSize=256m -XX:MaxPermSize=512m

If the prefix were soa then it would match soa_server1, soa_server2 etc.

There is a set of DEFAULT_ values that allow you to default the memory settings and only set them explicitly for servers whose settings differ from your default. Note that there must still be a matching prefix, otherwise the USER_MEM_ARGS will be the ones from setSOADomainEnv.sh.

This script needs to be called from setDomainEnv.sh. Add the call immediately before the following lines:

if [ "${USER_MEM_ARGS}" != "" ] ; then
        MEM_ARGS="${USER_MEM_ARGS}"
        export MEM_ARGS
fi

The script can be downloaded here.

Thoughts on Memory Setting

I know conventional wisdom is that Xms (initial heap size) and Xmx (max heap size) should be set the same to avoid needing to allocate memory after startup. However I don't agree with this. Setting Xms==Xmx works great if you are omniscient; I don't claim to be omniscient, my omni is a little limited, so I prefer to set Xms to what I think the server needs, which is what I would set it to if I were setting Xms=Xmx. However I then like to set Xmx a little higher than I think the server needs. This allows me to get the memory requirement wrong without my server failing due to running out of memory. In addition I can now monitor the server, and if the heap memory usage goes above my Xms setting I know that I calculated wrong and can change the Xms setting and restart the servers at a suitable time. Setting them not equal buys me time to fix increased memory needs, for example due to a change in usage patterns or new functionality; it also allows me to be less than omniscient.

ps Please don't tell my children I am not omniscient!
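The prefix matching described above can be sketched as a small shell function. This is an illustration only: the real setMem.sh supports up to nine prefixes plus DEFAULT_ values, and the function name here is made up.

```shell
#!/bin/sh
# Sketch of the setMem.sh prefix matching described above. Variable
# names follow the post; setServerMem is an illustrative name, not
# part of the downloadable script.

SRV1_PREFIX=Admin
SRV1_MINHEAP=768m
SRV1_MAXHEAP=1280m
SRV1_MINPERM=256m
SRV1_MAXPERM=512m

setServerMem() {
  case "$1" in
    ${SRV1_PREFIX}*)
      USER_MEM_ARGS="-Xms${SRV1_MINHEAP} -Xmx${SRV1_MAXHEAP}"
      # Empty PERM settings (e.g. for JRockit) are simply not appended
      if [ -n "${SRV1_MINPERM}" ] ; then
        USER_MEM_ARGS="${USER_MEM_ARGS} -XX:PermSize=${SRV1_MINPERM} -XX:MaxPermSize=${SRV1_MAXPERM}"
      fi
      export USER_MEM_ARGS
      ;;
  esac
}

setServerMem AdminServer
echo "${USER_MEM_ARGS}"
# prints: -Xms768m -Xmx1280m -XX:PermSize=256m -XX:MaxPermSize=512m
```

A server name with no matching prefix leaves USER_MEM_ARGS untouched, which is why setSOADomainEnv.sh values win in that case.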


Development

Share & Enjoy : Using a JDeveloper Project as an MDS Store

Share & Enjoy : Sharing Resources through MDS

One of my favorite radio shows was the Hitchhiker's Guide to the Galaxy by the sadly departed Douglas Adams. One of the characters, Marvin the Paranoid Android, was created by the Sirius Cybernetics Corporation, whose corporate song was entitled Share and Enjoy! Just like using the products of the Sirius Cybernetics Corporation, reusing resources through MDS is not fun, but at least it is useful and avoids some problems in SOA deployments. So in this blog post I am going to show you how to re-use SOA resources stored in MDS using JDeveloper as a development tool.

The Plan

We would like to have some SOA resources such as WSDLs, XSDs, Schematron files, DVMs etc. stored in a shared location. This gives us the following benefits:

Single source of truth for artifacts
Removes cross composite dependencies which can cause deployment and startup problems
Easier to find and reuse resources if stored in a single location

So we will store a WSDL and XSD in MDS, using a JDeveloper project to maintain the shared artifacts, using file based MDS to access them from development and database based MDS to access them at runtime. We will create the shared resources in a JDeveloper project and deploy them to MDS. We will then deploy a project that exposes a service based on the WSDL. Finally we will deploy a client project to the previous project that uses the same MDS resources.

Creating Shared Resources in a JDeveloper Project

First let's create a JDeveloper project and put our shared resources into that project. To do this:

In a JDeveloper Application create a New Generic Project (File->New->All Technologies->General->Generic Project)
In that project create a New Folder called apps (File->New->All Technologies->General->Folder) – it must be called apps for local file MDS to work correctly
In the project properties delete the existing Java Source Paths (Project Properties->Project Source Paths->Java Source Paths->Remove)
In the project properties add a new Java Source Path pointing to the just created apps directory (Project Properties->Project Source Paths->Java Source Paths->Add)

Having created the project we can now put our resources into it, either copying them from other projects or creating them from scratch.

Create a SOA Bundle to Deploy to a SOA Instance

Having created our resources we now want to package them up for deployment to a SOA instance. To do this we take the following steps:

Create a new JAR deployment profile (Project Properties->Deployment->New->Jar File)
In JAR Options uncheck the Include Manifest File
In File Groups->Project Output->Contributors uncheck all existing contributors and check the Project Source Path
Create a new SOA Bundle deployment profile (Application Properties->Deployment->New->SOA Bundle)
In Dependencies select the project jar file from the previous steps
On Application Properties->Deployment unselect all options

The bundle can now be deployed to the server by selecting Deploy from the Application menu.

Create a Database Based MDS Connection in JDeveloper

Having deployed our shared resources it would be good to check they are where we expect them to be, so let's create a database based MDS connection in JDeveloper to let us browse the deployed resources.

Create a new MDS Connection (File->All Technologies->General->Connections->SOA-MDS Connection)
Make the Connection Type DB Based MDS and choose the database connection and partition. The username of the connection will be the <PREFIX>_mds user and the MDS partition will be soa-infra.

Browse the repository to make sure that your resources deployed correctly under the apps folder. Note that you can also use this browser to look at deployed composites. You may find it interesting to look at the /deployed-composites/deployed-composites.xml file which lists all deployed composites.

Create a File Based MDS Connection in JDeveloper

We can now create a file based MDS connection to the project we just created. A file based MDS connection allows us to work offline without a database or SOA server. We will create a file based MDS that actually references the project we created earlier.

Create a new MDS Connection (File->All Technologies->General->Connections->SOA-MDS Connection)
Make the Connection Type File Based MDS and choose the MDS Root Folder to be the location of the JDeveloper project previously created (not the source directory, the top level project directory).

We can browse the file based MDS using the IDE Connections window in JDeveloper. This lets us check that we can see the contents of the repository.

Using File Based MDS

Now that we have MDS set up both in the database and locally in the file system we can try using some resources in a composite. To use a WSDL from the file based repository:

Insert a new Web Service Reference or Service onto your composite.xml.
Browse the Resource Palette for the WSDL in the File Based MDS connection and import it. Do not copy the resource into the project.
If you are creating a reference, don't worry about the warning message, that can be fixed later. Just say Yes you do want to continue and create the reference.

Note that when you import a resource from an MDS connection it automatically adds a reference to that MDS into the application's adf-config.xml. SOA applications do not deploy their adf-config.xml; they use it purely to help resolve oramds protocol references in SOA composites at design time. At runtime the soa-infra application's adf-config.xml is used to help resolve oramds protocol references.

The reason we point the file based MDS at the project directory rather than the apps directory underneath is that when we deploy SOA resources to MDS as a SOA bundle the resources are all placed under the apps MDS namespace. To make sure that our file based MDS includes an apps namespace we have to rename the src directory to apps and then make sure that our file based MDS points to the directory above the new source directory.

Patching Up References

When we use an abstract WSDL as a service the SOA infrastructure automatically adds binding and service information at run time. An abstract WSDL used as a reference needs to have binding and service information added in order to compile successfully. By default the imported MDS reference for an abstract WSDL will look like this:

<reference name="Service3"
   ui:wsdlLocation="oramds:/apps/shared/WriteFileProcess.wsdl">
  <interface.wsdl interface="http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.interface(WriteFileProcess)"/>
  <binding.ws port="" location=""/>
</reference>

Note that the port and location properties of the binding are empty. We need to replace the location with a runtime WSDL location that includes binding information; this can be obtained by getting the WSDL URL from the soa-infra application or from EM. Be sure to remove any MDS instance strings from the URL.

The port information is a little more complicated. The first part of the string should be the target namespace of the service, usually the same as the first part of the interface attribute of the interface.wsdl element. This is followed by #wsdl.endpoint and then, in parentheses, the service name from the runtime WSDL and the port name from the WSDL, separated by a /.
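The construction can be sanity checked with a quick shell snippet, using the namespace, service and port names from the WriteFileProcess example WSDL in this post:

```shell
# Assembling the binding.ws port value from its three pieces
# (example values from the WriteFileProcess WSDL in this post)
ns="http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess"  # target namespace
svc="writefileprocess_client_ep"                                  # wsdl:service name
port="WriteFileProcess_pt"                                        # wsdl:port name
endpoint="${ns}#wsdl.endpoint(${svc}/${port})"
echo "${endpoint}"
# prints: http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.endpoint(writefileprocess_client_ep/WriteFileProcess_pt)
```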
The format should look like this:

{Service Namespace}#wsdl.endpoint({Service Name}/{Port Name})

So if we have a WSDL like this:

<wsdl:definitions
   …
   targetNamespace=
   "http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess">
   …
   <wsdl:service name="writefileprocess_client_ep">
      <wsdl:port name="WriteFileProcess_pt"
            binding="client:WriteFileProcessBinding">
         <soap:address location=… />
      </wsdl:port>
   </wsdl:service>
</wsdl:definitions>

Then we get a binding.ws port like this:

http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.endpoint(writefileprocess_client_ep/WriteFileProcess_pt)

Note that you don't have to set actual values until deployment time. The following binding information will allow the composite to compile in JDeveloper, although it will not run in the runtime:

<binding.ws port="dummy#wsdl.endpoint(dummy/dummy)" location=""/>

The binding information can be changed in the configuration plan. Deferring it this way means that you must have a configuration plan in order to invoke the reference, which reduces the risk of deploying composites with references that point to the wrong environment.

Summary

In this blog post I have shown how to store resources in MDS so that they can be shared between composites. The resources can be created in a JDeveloper project that doubles as an MDS file repository. The MDS resources can be reused in composites. If using an abstract WSDL from MDS I have also shown how to fix up the binding information so that at runtime the correct endpoint can be invoked. Maybe it is more fun than dealing with the Sirius Cybernetics Corporation!


Development

Multiple SOA Developers Using a Single Install

Running Multiple SOA Developers from a Single Install

A question just came up about how to run multiple developers from a single software install. The objective is to have a single software installation on a shared server and then provide different OS users with the ability to create their own domains. This is not a supported configuration but it is attractive for a development environment.

Out of the Box

Before we do anything special let's review the basic installation:

Oracle WebLogic Server 10.3.6 installed using the oracle user in a Middleware Home
Oracle SOA Suite 11.1.1.7 installed using the oracle user
Software installed with group oinstall
Developer users dev1, dev2 etc.
Each developer user is a member of the oinstall group and has access to the Middleware Home

Customizations

To get this to work I did the following customization. In the Middleware Home, make all user readable files/directories group readable and make all user executable files/directories group executable:

find $MW_HOME -perm /u+r ! -perm /g+r | xargs -Iargs chmod g+r args
find $MW_HOME -perm /u+x ! -perm /g+x | xargs -Iargs chmod g+x args

Domain Creation

When creating a domain for a developer note the following:

Each developer will need their own FMW repository, perhaps prefixed by their username, e.g. dev1, dev2 etc.
Each developer needs to use a unique port number for all WebLogic channels
Any use of Coherence should use Well Known Addresses to avoid cross talk between developer clusters (note SOA and OSB both use Coherence!)
If using Node Manager each developer will need their own instance, using their own configuration
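The two find commands from the Customizations step can be wrapped into a small reusable function. This is a sketch: it assumes GNU find (for the /u+r symbolic -perm syntax), paths without embedded whitespace, and that it is run as the user owning the installation.

```shell
#!/bin/sh
# Sketch: make everything the owner can read/execute also readable/
# executable by the group (oinstall), as described above.
# Assumes GNU find and whitespace-free paths.

fixGroupPerms() {
  # Group-readable wherever the owner can read
  find "$1" -perm /u+r ! -perm /g+r | xargs -Iargs chmod g+r args
  # Group-executable wherever the owner can execute
  find "$1" -perm /u+x ! -perm /g+x | xargs -Iargs chmod g+x args
}
```

Run it against the shared install with `fixGroupPerms "$MW_HOME"`.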


SOA Suite

Getting Started with Oracle SOA B2B Integration: A hands On Tutorial

Book: Getting Started with Oracle SOA B2B Integration: A hands On Tutorial

Before OpenWorld I received a copy of a new book by Scott Haaland, Alan Perlovsky & Krishnaprem Bhatia entitled Getting Started with Oracle SOA B2B Integration: A hands On Tutorial. A free download of Chapter 3 is available to help you get a feel for the style of the book.

A useful new addition to the growing library of Oracle SOA Suite books, it starts off by putting B2B into context and identifying some common B2B message patterns and messaging protocols. The rest of the book then takes the form of tutorials on how to use Oracle B2B, interspersed with useful tips, such as how to set up B2B as a hub to connect different trading partners, similar to the way a VAN works.

The book goes a little beyond a tutorial by providing suggestions on best practice, giving advice on the best way to do things in certain circumstances.

I found the chapter on reporting & monitoring to be particularly useful, especially the BAM section, as I find many customers are able to use BAM reports to sell a SOA/B2B solution to the business.

The chapter on Preparing to Go-Live should be read closely before the go live date; at the very least pay attention to the "Purging data" section.

Not being a B2B expert I found the book helpful in explaining how to accomplish tasks in Oracle B2B, and also in identifying the capabilities of the product. Many SOA developers, myself included, view B2B as a glorified adapter, and in many ways it is, but it is an adapter with amazing capabilities.

The editing seems a little loose; the language is strange in places and there are references to colors on black and white diagrams, but the content is solid and helpful to anyone tasked with implementing Oracle B2B.


Fusion Middleware

Oracle SOA Suite 11g Performance Tuning Cookbook

Just received this to review.

It's a Java World

The first chapter identifies tools and methods to identify performance bottlenecks, generally covering low level JVM and database issues. Useful material, but not really SOA specific, and I think the authors missed the opportunity to share the knowledge they obviously have of how to relate these low level JVM measurements to SOA causes.

Chapter 2 uses the EMC Hyperic tool to monitor SOA Suite, so this chapter may be of limited use to many readers. Many, but not all, of the recipes could have been accomplished using the FMW Control that ships with, and is included in the license of, SOA Suite. One of the recipes uses DMS, the built in FMW monitoring system built by Oracle before the acquisition of BEA. Again this seems to be more about Hyperic than SOA Suite.

Chapter 3 covers performance testing using Apache JMeter. Like the previous chapters there is very little specific to SOA Suite; indeed in my experience many SOA Suite implementations do not have a Web Service to initiate composites, instead relying on adapters.

Chapter 4 covers JVM memory management; this is another good general Java section but has few SOA specifics in it.

Chapter 5 is yet more Java tuning, in this case generic garbage collection tuning. Like the earlier chapters, good material but not very SOA specific. I can't help feeling that the authors could have made more connections with SOA Suite specifics in their recipes.

Chapter 6 is called platform tuning, but it could have been titled miscellaneous tuning. This includes a number of Linux optimizations, WebLogic optimizations and JVM optimizations. I am not sure that yet another explanation of how to create a boot.properties file was needed.

Chapter 7 homes in on JMS & JDBC tuning in WebLogic.

SOA at Last

Chapter 8 finally turns to SOA specifics. Unfortunately the description of what dispatcher invoke threads do is misleading: they only control the number of threads retrieving messages from the request queue; synchronous web service calls do not use the request queue and hence do not use these threads. Several of the recipes in this chapter do more than alter the performance characteristics, they also alter the semantics of the BPEL engine (such as "Changing a BPEL process to be transient"), and I wish there was more discussion of the impacts of these in the chapter. I didn't see any reference to the impact on recoverability of processes when turning on in-memory message delivery. That said, the recipes do cover a lot of useful optimizations, and if used judiciously will boost performance.

Chapter 9 covers optimizing the Mediator, primarily tweaking Mediator threading. The descriptions of the impacts of the changes in this chapter are very good, and give some helpful indications of whether they will apply to your environment.

Chapter 10 touches very lightly on Rules and Human Workflow; this chapter would have benefited from more recipes. The two recipes for Rules offer very valuable advice. The two workflow recipes seem less valuable.

Chapter 11 takes us into the area where the greatest performance optimizations are to be found, the SOA composite itself. Seven generally useful recipes are provided, and I would have liked to see more in this chapter, perhaps at the expense of some of the Java tuning in the first half of the book. I have to say that I do not agree with the "Designing BPEL processes to reduce persistence" recipe; there are better, more maintainable and faster ways to deal with this. The other recipes provide valuable ideas that may help the performance of your composites.

Chapter 12 promises "High Performance Configuration". Three of the recipes, on creating a cluster, configuring an HTTP plug-in and setting up distributed queues, are covered better in the Oracle documentation, particularly the Enterprise Deployment Guide. There are however some good suggestions in the recipe about deploying on virtualized environments; I wish they had said more about this. The JMS bridges recipe is also a very valuable one that people should be aware of.

The Good, the Bad, and the Ugly

A lot of the recipes are really just trivial variations on other recipes; for example there is one recipe on "Increasing the JVM heap size" and another on "Setting Xmx and Xms to the same value".

Although the book spends a lot of time on Java tuning, that of itself is reasonable as a lot of SOA performance tuning is tweaking JVM and WLS parameters. I would have found it more useful if the dots were connected to relate the Java/WLS tuning sections to specific SOA use cases.

As the authors say when talking about adapter tuning: "The preceding sets of recipes are the basics … available in Oracle SOA Suite. There are many other properties that can be tuned, but their effectiveness is situational, so we have chosen to focus on the ones that we feel give improvement for the most projects." They have made a good start, and maybe in a 12c version of the book they can provide more SOA specific information in their Java tuning sections.

Add the book to your library; you are almost certain to find useful ideas here, but make sure you understand the implications of the changes you are making, as the authors do not always spell out the impact on the semantics of your composites.

A sample chapter is available on the Packt web site.


Development

WebLogic Admin Cookbook Review

Review of Oracle WebLogic Server 12c Advanced Administration Cookbook

Like all of Packt's cookbook titles, the book follows a standard format of a recipe followed by an explanation of how it works and then a discussion of additional recipe related features and extensions.

When reading this book I tried out some of the recipes on an internal beta of 12.1.2 and they seemed to work fine on that future release.

The book starts with basic installation instructions that belie its title. The author is keen to use console mode, which is often needed for servers that have no X11 client libraries; however for all but the simplest of domains I find console mode very slow and difficult to use, and would suggest that where possible you persuade the OS admin to make X11 client libraries available, at least for the duration of the domain configuration.

Another pet peeve of mine is using nohup to start servers/services without redirecting output, with the result that you are left with nohup.out files scattered all over your disk. The book falls into this trap.

However we soon sweep into some features of WebLogic that I believe are less well understood, such as using the pack/unpack commands and customizing the console screen. The "Protecting changes in the Administration Console" recipe is particularly useful.

The next chapter covers HA configuration. One of the nice things about this book is that most recipes are illustrated not only using the console but also using WLST. The coverage of multiple NICs and dedicated network channels is very useful in the Exalogic world as well as regular WLS clusters. One point I would quibble with is the setting up of HA for the AdminServer. I would always do this with a shared file system rather than copying files around, and I would also prefer a floating IP address to avoid having to update the DNS.

Other chapters cover JDBC & JMS, Monitoring, Stability, Performance and Security.

Overall the recipes are useful; I certainly learned some new ways of doing things. The WLST example code is a real plus. Well worth adding to your WebLogic admin library.

The book is available on the Packt website.


SOA Suite

SOA Suite 11g Developers Cookbook Published

SOA Suite 11g Developers Cookbook Available

Just realized that I failed to mention that Matt's and my most recent book, the SOA Suite 11g Developers Cookbook, was published over Christmas last year!

In some ways this was an easier book to write than the Developers Guide; the hard bit was deciding what recipes to include. Once we had decided that, the writing of the book was pretty straightforward.

The book focuses on areas that we felt we had neglected in the Developers Guide, and so there is more about Java integration and OSB, both of which we see a lot of questions about when working with customers.

Amazon has a couple of reviews.

Table of Contents

Chapter 1: Building an SOA Suite Cluster
Chapter 2: Using the Metadata Service to Share XML Artifacts
Chapter 3: Working with Transactions
Chapter 4: Mapping Data
Chapter 5: Composite Messaging Patterns
Chapter 6: OSB Messaging Patterns
Chapter 7: Integrating OSB with JSON
Chapter 8: Compressed File Adapter Patterns
Chapter 9: Integrating Java with SOA Suite
Chapter 10: Securing Composites and Calling Secure Web Services
Chapter 11: Configuring the Identity Service
Chapter 12: Configuring OSB to Use Foreign JMS Queues
Chapter 13: Monitoring and Management

More Reviews

In addition to the Amazon reviews I also found some reviews on GoodReads.


Fusion Middleware

Target Verification

Verifying the Target

I just built a combined OSB, SOA/BPM, BAM clustered domain. The biggest hassle is validating that the resource targeting is correct. There is a great appendix in the documentation that lists all the modules and resources with their associated targets. The only problem is that the appendix is six pages of small print. I manually went through the first page, verifying my targeting, until I thought "there must be a better way of doing this". So this blog post is the better way.

WLST to the Rescue

The WebLogic Scripting Tool allows us to query the MBeans and discover what resources are deployed and where they are targeted. So I built a script that iterates over each of the following resource types and verifies that they are correctly targeted:

Applications
Libraries
Startup Classes
Shutdown Classes
JMS System Resources
WLDF System Resources

Source Data

To get the data to verify my domain against, I copied the tables from the documentation into a text file. The copy ended up putting the resource on the first line and the targets on the second line. Rather than reformat the data I just read the lines in pairs, storing the resource as a string and splitting the targets into a list of strings. I then stored the data in a dictionary with the resource string as the key and the target list as the value. The code to do this is shown below:

# Load resource and target data from file created from documentation
# File format is one line with resource name followed by
# one line with comma separated list of targets
# fileIn - Resource & Target File
# accum - Dictionary containing mappings of expected Resource to Target
# returns - Dictionary mapping expected Resource to expected Target
def parseFile(fileIn, accum) :
  # Load resource name
  line1 = fileIn.readline().strip('\n')
  if line1 == '':
    # Done if no more resources
    return accum
  else:
    # Load list of targets
    line2 = fileIn.readline().strip('\n')
    # Convert string to list of targets
    targetList = map(fixTargetName, line2.split(','))
    # Associate resource with list of targets in dictionary
    accum[line1] = targetList
    # Parse remainder of file
    return parseFile(fileIn, accum)

This makes it very easy to update the lists by just copying and pasting from the documentation. Each table in the documentation has a corresponding file that is used by the script. The target names read from the file are mapped to the actual domain target names, which are provided in a properties file.

Listing & Verifying the Resources & Targets

Within the script I move to the domain configuration MBean and then iterate over the deployed resources, and for each resource iterate over its targets, validating them against the corresponding targets read from the file as shown below:

# Validate that resources are correctly targeted
# name - Name of Resource Type
# filename - Filename to validate against
# items - List of Resources to be validated
def validateDeployments(name, filename, items) :
  print name+' Check'
  print "====================================================="
  fList = loadFile(filename)
  # Iterate over resources
  for item in items:
    try:
      # Get expected targets for resource
      itemCheckList = fList[item.getName()]
      # Iterate over actual targets
      for target in item.getTargets() :
        try:
          # Remove actual target from expected targets
          itemCheckList.remove(target.getName())
        except ValueError:
          # Target not found in expected targets
          print 'Extra target: '+item.getName()+': '+target.getName()
      # Iterate over remaining expected targets, if any
      for refTarget in itemCheckList:
        print 'Missing target: '+item.getName()+': '+refTarget
    except KeyError:
      # Resource not found in expected resource dictionary
      print 'Extra '+name+' Deployed: '+item.getName()
  print

Obtaining the Script

I have uploaded the script here. It is a zip file containing all the required files together with a PDF explaining how to use the script. To install just unzip VerifyTargets.zip. It will create the following files:

verifyTargets.sh
verifyTargets.properties
VerifyTargetsScriptInstructions.pdf
scripts/verifyTargets.py
scripts/verifyApps.txt
scripts/verifyLibs.txt
scripts/verifyStartup.txt
scripts/verifyShutdown.txt
scripts/verifyJMS.txt
scripts/verifyWLDF.txt

Sample Output

The following is sample output from running the script:

Application Check
=====================================================
Extra Application Deployed: frevvo
Missing target: usermessagingdriver-xmpp: optional
Missing target: usermessagingdriver-smpp: optional
Missing target: usermessagingdriver-voicexml: optional
Missing target: usermessagingdriver-extension: optional
Extra target: Healthcare UI: soa_cluster
Missing target: Healthcare UI: SOA_Cluster ??
Extra Application Deployed: OWSM Policy Support in OSB Initializer Aplication

Library Check
=====================================================
Extra Library Deployed: oracle.bi.adf.model.slib#1.0@11.1.1.2.0
Extra target: oracle.bpm.mgmt#11.1.1@11.1.1: AdminServer
Missing target: oracle.bpm.mgmt#11.1.1@11.1.1: soa_cluster
Extra target: oracle.sdp.messaging#11.1.1@11.1.1: bam_cluster

StartupClass Check
=====================================================

ShutdownClass Check
=====================================================

JMS Resource Check
=====================================================
Missing target: configwiz-jms: bam_cluster

WLDF Resource Check
=====================================================

IMPORTANT UPDATE

Since posting this I have discovered a number of issues. I have updated the configuration files to correct these problems. The changes made are as follows:

Added a WLS_OSB1 server mapping to the script properties file (verifyTargets.properties) to accommodate OSB singletons, and modified the script (verifyTargets.py) to use the new property.
Changes to verifyApplications.txt:
Changed target from OSB_Cluster to WLS_OSB1 for the following applications: ALSB Cluster Singleton Marker Application, ALSB Domain Singleton Marker Application, Message Reporting Purger
Added the following application targeted at SOA_Cluster: frevvo
Added the following application targeted at OSB_Cluster & Admin Server: OWSM Policy Support in OSB Initializer Aplication
Changes to verifyLibraries.txt:
Added the following library targeted at OSB_Cluster, SOA_Cluster, BAM_Cluster & Admin Server: oracle.bi.adf.model.slib#1.0@11.1.1.2.0
Modified targeting of the following library to include BAM_Cluster: oracle.sdp.messaging#11.1.1@11.1.1

Make sure that you download the latest version. It is at the same location but now includes a version file (version.txt). The contents of the version file should be:

FMW_VERSION=11.1.1.7
SCRIPT_VERSION=1.1


Fusion Middleware

Event Processed

Installing Oracle Event Processing 11g

Earlier this month I was involved in organizing the Monument Family History Day.  It was certainly a complex event, with dozens of presenters, guides and 100s of visitors.  So with that experience of a complex event under my belt I decided to refresh my acquaintance with Oracle Event Processing (CEP).

CEP has a developer side based on Eclipse and a runtime environment.

Server Install
The server install is very straightforward (documentation).  It is recommended to use the JRockit JDK with CEP, so the steps to set up a working CEP server environment are:

- Download required software:
  - JRockit – I used Oracle "JRockit 6 - R28.2.5" which includes "JRockit Mission Control 4.1" and "JRockit Real Time 4.1".
  - Oracle Event Processor – I used "Complex Event Processing Release 11gR1 (11.1.1.6.0)".
- Install JRockit:
  - Run the JRockit installer; the download is an executable binary that just needs to be marked as executable.
- Install CEP:
  - Unzip the downloaded file.
  - Run the CEP installer; the unzipped file is an executable binary that may need to be marked as executable.
  - Choose a custom install and add the examples if needed.  It is not recommended to add the examples to a production environment, but they can be helpful in development.

Developer Install
The developer install requires several steps (documentation).  A developer install needs access to the software for the server install, although JRockit isn't necessary for development use.

- Download required software:
  - Eclipse (Linux) – It is recommended to use version 3.6.2 (Helios).
- Install Eclipse:
  - Unzip the download into the desired directory.
- Start Eclipse.
- Add the Oracle CEP Repository in Eclipse: http://download.oracle.com/technology/software/cep-ide/11/
- Install Oracle CEP Tools for Eclipse 3.6.  You may need to set the proxy if behind a firewall.
- Modify eclipse.ini:
  - If using Windows, edit with WordPad rather than Notepad.
  - Point to a 1.6 JVM: insert the following lines before -vmargs:
    -vm
    \PATH_TO_1.6_JDK\jre\bin\javaw.exe
  - Increase PermGen memory: insert the following line at the end of the file:
    -XX:MaxPermSize=256M
- Restart Eclipse and verify that everything is installed as expected.

Voila, The Deed Is Done
With CEP installed you are now ready to start a server; if you didn't install the demos then you will need to create a domain before starting the server.

Once the server is up and running (using startwlevs.sh) you can verify that the visualizer is available on http://hostname:port/wlevs; the default port for the demo domain is 9002.

With the server running you can test the IDE by creating a new "Oracle CEP Application Project" and creating a new target environment pointing at your CEP installation.

Much easier than organizing a Family History Day!
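Putting the eclipse.ini edits together, the modified file might end up looking something like this.  This is an illustrative sketch: the JDK path and the existing -vmargs memory settings will differ on your machine; only the -vm placement (before -vmargs) and the trailing -XX:MaxPermSize line come from the steps above.

```
-vm
C:\PATH_TO_1.6_JDK\jre\bin\javaw.exe
-vmargs
-Xms40m
-Xmx512m
-XX:MaxPermSize=256M
```

Note that -vm and its path must be on two separate lines, and both must appear before -vmargs or Eclipse will treat the path as a JVM argument.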


SOA Suite

Following the Thread in OSB

Threading in OSB

The Scenario
I recently led an OSB POC where we needed to get high throughput from an OSB pipeline that had the following logic:

1. Receive Request
2. Send Request to External System
3. If Response has a particular value
  3.1 Modify Request
  3.2 Resend Request to External System
4. Send Response back to Requestor

All looks very straightforward, with no nasty wrinkles along the way.  The flow was implemented in OSB as follows (see diagram for more details):

- Proxy Service to Receive Request and Send Response
- Request Pipeline
  - Copies Original Request for use in step 3
- Route Node
  - Sends Request to External System exposed as a Business Service
- Response Pipeline
  - Checks Response to See If Request Needs to Be Resubmitted
  - Modify Request
  - Callout to External System (same Business Service as Route Node)

The Proxy and the Business Service were each assigned their own Work Manager, effectively giving each of them their own thread pool.

The Surprise
Imagine our surprise when, on stressing the system, we saw it lock up with large numbers of blocked threads.  The reason for the lock up is due to some subtleties in the OSB thread model, which is the topic of this post.

Basic Thread Model
OSB goes to great lengths to avoid holding on to threads.  Let's start by looking at how OSB deals with a simple request/response routing to a business service in a route node.

Most Business Services are implemented by OSB in two parts.  The first part uses the request thread to send the request to the target.  In the diagram this is represented by the thread T1.  After sending the request to the target (the Business Service in our diagram) the request thread is released back to whatever pool it came from.  A multiplexor (muxer) is used to wait for the response.  When the response is received the muxer hands off the response to a new thread that is used to execute the response pipeline; this is represented in the diagram by T2.

OSB allows you to assign different Work Managers, and hence different thread pools, to each Proxy Service and Business Service.  In our example we have the "Proxy Service Work Manager" assigned to the Proxy Service and the "Business Service Work Manager" assigned to the Business Service.  Note that the Business Service Work Manager is only used to assign the thread that processes the response; it is never used to process the request.

This architecture means that while waiting for a response from a business service there are no threads in use, which makes for better scalability in terms of thread usage.

First Wrinkle
Note that if the Proxy and the Business Service both use the same Work Manager then there is potential for starvation.  For example:

- The Request Pipeline makes a blocking callout, say to perform a database read.
- The Business Service response tries to allocate a thread from the thread pool, but all threads are blocked in the database read.
- New requests arrive and contend with arriving responses for the available threads.

Similar problems can occur if the response pipeline blocks for some reason, maybe a database update for example.

Solution
The solution to this is to make sure that the Proxy and Business Service use different Work Managers so that they do not contend with each other for threads.

Do Nothing Route Thread Model
So what happens if there is no route node?  In this case OSB just echoes the Request message as a Response message, but what happens to the threads?  OSB still uses a separate thread for the response, but in this case the Work Manager used is the Default Work Manager.

So this is really a special case of the Basic Thread Model discussed above, except that the response pipeline will always execute on the Default Work Manager.

Proxy Chaining Thread Model
So what happens when the route node is actually calling a Proxy Service rather than a Business Service?  Does the second Proxy Service use its own thread, or does it re-use the thread of the original Request Pipeline?

As you can see from the diagram, when a route node calls another proxy service the original Work Manager is used for both request pipelines.  Similarly the response pipeline uses the Work Manager associated with the ultimate Business Service invoked via a Route Node.  This actually fits in with the earlier description I gave about Business Services, and by extension Route Nodes: they "… use the request thread to send the request to the target".

Call Out Threading Model
So what happens when you make a Service Callout to a Business Service from within a pipeline?  The documentation says that "The pipeline processor will block the thread until the response arrives asynchronously" when using a Service Callout.  What this means is that the target Business Service is called using the pipeline thread, but the response is also handled by the pipeline thread.  This implies that the pipeline thread blocks waiting for a response.  It is the handling of this response that behaves in an unexpected way.

When a Business Service is called via a Service Callout, the calling thread is suspended after sending the request, but unlike the Route Node case the thread is not released; it waits for the response.  The muxer uses the Business Service Work Manager to allocate a thread to process the response, but in this case processing the response means getting the response and notifying the blocked pipeline thread that the response is available.  The original pipeline thread can then continue to process the response.

Second Wrinkle
This leads to an unfortunate wrinkle.  If the Business Service is using the same Work Manager as the Pipeline then it is possible for starvation or a deadlock to occur.  The scenario is as follows:

- A Pipeline makes a Callout and its thread is suspended but still allocated.
- Multiple Pipeline instances using the same Work Manager are in this state (common for a system under load).
- A response comes back, but all Work Manager threads are allocated to blocked pipelines.
- The response cannot be processed and so the pipeline threads never unblock – deadlock!

Solution
The solution to this is to make sure that any Business Services used by a Callout in a pipeline use a different Work Manager to the pipeline itself.

The Solution to My Problem
Looking back at my original workflow, we see that the same Business Service is called twice, once in a Routing Node and once in a Response Pipeline Callout.  This was what was causing my problem: the response pipeline was using the Business Service Work Manager, but the Service Callout wanted to use the same Work Manager to handle its responses, so eventually my Response Pipeline hogged all the available threads and no responses could be processed.

The solution was to create a second Business Service pointing to the same location as the original Business Service; the only difference was to assign a different Work Manager to this Business Service.  This ensured that when the Service Callout completed there were always threads available to process the response, because the response processing from the Service Callout had its own dedicated Work Manager.

Summary
- Request Pipeline
  - Executes on a Proxy Work Manager (WM) thread, so limited by the settings of that WM.  If no WM is specified then it uses the WLS default WM.
- Route Node
  - Request sent using the Proxy WM thread.
  - The Proxy WM thread is released before getting the response.
  - A muxer is used to handle the response.
  - The muxer hands off the response to the Business Service (BS) WM.
- Response Pipeline
  - Executes on the routed Business Service WM thread, so limited by the settings of that WM.  If no WM is specified then it uses the WLS default WM.
- No Route Node (echo functionality)
  - The Proxy WM thread is released.
  - A new thread from the default WM is used for the response pipeline.
- Service Callout
  - Request sent using the proxy pipeline thread.
  - The proxy thread is suspended (not released) until the response comes back.
  - Notification of the response is handled by a BS WM thread, so limited by the settings of that WM.  If no WM is specified then it uses the WLS default WM.  Note this is a very short lived use of the thread.
  - After notification by the callout BS WM thread, that thread is released and execution continues on the original pipeline thread.
- Route/Callout to Proxy Service
  - The Request Pipeline of the callee executes on the requestor thread.
  - The Response Pipeline of the caller executes on the response thread of the requested proxy.
- Throttling
  - The request message may be queued if the limit is reached.
  - The requesting thread is released (route node) or suspended (callout).

So what this means is that you may get deadlocks caused by thread starvation if you use the same thread pool for the business service in a route node and the business service in a callout from the response pipeline, because the callout will need a notification thread from the same thread pool as the response pipeline.  This was the problem we were having.  You get a similar problem if you use the same work manager for the proxy request pipeline and a business service callout from that request pipeline.  It also means you may want to have different work managers for the proxy and the business service in the route node.  Basically you need to think carefully about how threading impacts your proxy services.

References
Thanks to Jay Kasi, Gerald Nunn and Deb Ayers for helping to explain this to me.  Any errors are my own and not theirs.  Also thanks to my colleagues Milind Pandit and Prasad Bopardikar who travelled this road with me.

OSB Thread Model
Great Blog Post on Thread Usage in OSB
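The Second Wrinkle can be reproduced outside OSB with any bounded thread pool: a "pipeline" task that blocks waiting for a "notification" task that must run on the same pool can never complete once the pool is exhausted, while giving the notification its own pool (the equivalent of a dedicated Work Manager) fixes it.  This is a minimal Python illustration of that principle, not OSB code, and all names in it are mine:

```python
import concurrent.futures as cf

def pipeline_with_callout(notify_pool, value):
    # The "pipeline" thread sends a request, then blocks waiting for a
    # notification that must itself run on a thread from notify_pool --
    # analogous to a Service Callout whose response notification uses
    # the Business Service Work Manager.
    notification = notify_pool.submit(lambda: value * 2)
    return notification.result(timeout=2)  # blocks, thread not released

# Shared pool: pipeline and notification contend for the single thread,
# so the notification can never be scheduled -- starvation/deadlock
# (bounded here by the timeout so the demo terminates).
shared = cf.ThreadPoolExecutor(max_workers=1)
try:
    shared.submit(pipeline_with_callout, shared, 21).result()
    starved = False
except cf.TimeoutError:
    starved = True

# Separate pools (separate Work Managers): the notification always has
# a thread available, so the pipeline completes normally.
pipeline_pool = cf.ThreadPoolExecutor(max_workers=1)
callout_pool = cf.ThreadPoolExecutor(max_workers=1)
ok = pipeline_pool.submit(pipeline_with_callout, callout_pool, 21).result()
print(starved, ok)  # True 42
```

In a real OSB domain the pool sizes are much larger, which is why the problem only shows up under load: the deadlock needs every thread in the shared pool to be occupied by a suspended pipeline.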


Open World Day 3

A Day in the Life of an Oracle OpenWorld Attendee Part IV

My third day was exhibition day for me!  I took the opportunity to wander around the JavaOne and OpenWorld exhibitions to see what might be useful for me when selling WebLogic, Coherence & SOA Suite.  I found a number of interesting vendors and thought I would share what I found here.  These are not necessarily endorsements, but observations on companies that I thought had interesting looking products that fill a need I have seen at customers.

Highly Available EBS Upgrades
A few years ago I worked with a customer that was a port authority.  They wanted to tie E-Business Suite into their operations to provide faster processing of cargo and passengers.  However they only had a 2 hour downtime window in which to perform upgrades.  This was not a problem for core database and middleware technology, which could accommodate those upgrade timescales easily.  It was a problem for EBS however, so I was intrigued to find Rapid E-Suite Inc offering an 11i to 12i upgrade service that claims to require no outage.  This could be a real boon to EBS customers like my port friends that need to upgrade without disruption to their business.

Mobile on WebLogic
I have come across a number of customers who want a comprehensive mobile solution, with connected and disconnected operation and so forth.  ADF only addresses part of these requirements currently, so I was excited to discover mFrontiers Inc offering an apparently comprehensive solution that should integrate easily with Oracle SOA Suite to mobile-enable a SOA infrastructure.  The ability to operate without a network is important for many applications, particularly in industries that require their engineers to enter buildings to perform maintenance or repairs, because network access is not always available – many of my colleagues don't have mobile access from their homes because they live in the middle of nowhere – and disconnected support is crucial in these situations.

Sharepoint Connector for WebCenter Content
Obviously Sharepoint is an evil, pernicious intrusion into a company's IT estate, but it is widely deployed, and many people who like it would also like to take advantage of Oracle products such as WebCenter Content.  So I was encouraged to see that Fishbowl Solutions have created a connector for Sharepoint that allows it to bring in content from WebCenter.  It looks like a valuable way to maintain the Sharepoint interface end users are used to but extend the range of content by pulling stuff (technical term for content) from WebCenter.

Load Balancing
The Enterprise Deployment Guides are Oracle's bible on building highly available FMW environments, and each of them requires a front end load balancer.  I have been asked to help configure F5 Load Balancers on a number of occasions over my time at Oracle, and each time I come back to it I find more useful features have been added to the BigIP line of load balancers that F5 sell; many of their documents are tailored to FMW.  I like F5: they provide (relatively) easy to use products that do what they say on the side of the box.  They may not have all the bells and whistles of some of their more expensive competitors, but they do the job and do it well!  Besides which, I like their logo!

Other Stuff
I saw lots of other interesting products and services, such as a lightweight monitoring tool for Coherence, Forms migration services, JCAPS migration services and lots of cool freebies to take home to the children!

A Quiet Night
Wednesday night was the partner appreciation event and I had decided to go back to the hotel and have an early night.  I decided to attend the last session of the day – a Maven/Hudson/WebLogic tutorial.  I got the wrong hotel for the session, snuck in 20 minutes late at the back and started working on the hands-on workshop.  One of my co-attendees raised his hand for help, and as the presenter came over to help he suddenly stopped and yelled, "Is that Antony?"!  It was my old friend Steve Button, who used to be based in Redwood Shores but is now a WebLogic guru PM in Australia.  It was good to catch up with him.  As he yelled out, a guy with really bad posture turned around to see who he was talking to; this turned out to be my friend Simon Haslan, Oracle ACE from the UK.  After the tutorial Simon and I retired to the coffee shop to catch up and share stories.  Two and a half hours later we decided it was time to retire, so much for an early night, but great to renew old friendships and find out what real customers are worrying about.


Open World Day 1 Continued

A Day in the Life of an Oracle OpenWorld Attendee Part II

A couple of things I forgot to mention about yesterday's OpenWorld.

First I attended a presentation on SOA Suite and Virtualization which explained how Oracle Virtual Assembly Builder (OVAB) can be used to accelerate the deployment of an Enterprise Deployment Guide (EDG) compliant SOA Suite infrastructure.  OVAB provides the ability to introspect a deployed software component, such as WebLogic Server, SOA Suite or other components, and extract the configuration and package it up for rapid deployment into an Oracle Virtual Machine.  OVAB allows multiple machines to be configured and connections made between the machines and outside resources such as databases.  That by itself is pretty cool and has been available for a while in OVAB.  What is new is that Oracle has done this for an EDG compliant installation and made it available as an OVAB assembly for customers to use, significantly accelerating the deployment of an EDG environment.  A real help for customers standing up EDG environments, particularly in test, dev and QA environments.

The other thing I forgot to mention was the most memorable demo I saw at OpenWorld.  This was done by my co-author Matt Wright, who was showcasing the products of his company Rubicon Red.  They showed a really cool application called OneSpot, which puts all the information about a single user's business processes in one spot!  Apparently a customer suggested the name.  It allows business flows to be defined that map onto events.  As events occur, the status of the business flow is updated to reflect the change.  The interface is strongly reminiscent of social media sites and provides a graphical view of business flows.  So how does this differ from BPEL and BPM process flows?  The OneSpot process flow is more like a BAM process flow: it is based on events arriving from multiple sources, and is focused on the client's view of the process, not the actual business process.  This is important because it allows an end user to get a view of where his current business flow is and what actions, if any, are required of him.  This by itself is great, but better still is that OneSpot has a real time updating view of events that have occurred (BAM style, no need to refresh the browser).  This means that as new events occur the end user can see them and jump to the business flow or take other appropriate actions.  Under the covers OneSpot makes use of Oracle Human Workflow to provide a forms interface, but this is not the HWF GUI you know!  The HWF GUI screens are much prettier and have more of a social media feel about them due to their use of images and pulling in relevant related information.  If you are at OOW I strongly recommend you visit Matt or John at the Rubicon Red stand and ask, no demand, a demo of OneSpot!


OpenWorld Day 1

A Day in the Life of an OpenWorld Attendee Part I

Lots of people are blogging insightfully about OpenWorld, so I thought I would provide some non-insightful remarks to buck the trend!

With 50,000 attendees I didn't expect to bump into too many people I knew; boy was I wrong!  I walked into the registration area and immediately was hailed by a couple of customers I had worked with a few months ago.  Moving to the employee registration area in a different hall I bumped into a colleague from the UK who was also registering.  As soon as I got my badge I bumped into a friend from Ireland!  So maybe OpenWorld isn't so big after all!

First port of call was Larry's keynote.  As always, Larry was provocative and thought provoking.  His key points were announcing the Oracle cloud offering in IaaS, PaaS and SaaS, pointing out that Fusion Apps are cloud enabled, and finally announcing the 12c Database, making a big play of its new multi-tenancy features.  His contention was that multi-tenancy will simplify cloud development and provide better security by providing DB level isolation for applications and customers.

Next day, Monday, was my first full day at OpenWorld.  The first session I attended was on monitoring of OSB, a very interesting presentation on the benefits achieved by an Illinois area telco – US Cellular.  Great discussion of why they bought the SOA Management Packs and the benefits they are already seeing from their investment in terms of improved provisioning and time to market, as well as better performance insight and assistance with capacity planning.

Craig Blitz provided a nice walkthrough of where Coherence has been and where it is going.

Last night I attended the BOF on Managed File Transfer, where Dave Berry replayed Oracle's thoughts on providing dedicated Managed File Transfer as part of the 12c SOA release.  Dave laid out the perceived requirements and solicited feedback from the audience on what, if anything, was missing.  He also demoed an early version of the functionality that would simplify setting up MFT in SOA Suite and make tracking activity much easier.

So much for Day 1.  I also ran into scores of old friends and colleagues and had a pleasant dinner with my friend from Ireland, where I caught up on the latest news from Oracle UK.  Not bad for Day 1!


SOA Suite

Deploying Fusion Order Demo on 11.1.1.6

How to Deploy Fusion Order Demo on SOA Suite 11.1.1.6

We needed to build a demo for a customer, so why not use Fusion Order Demo (FOD) and modify it to do some extra things?  Great idea, let me install it on one of my Linux servers, I said…  Turns out there are a few gotchas, so here is how I installed it on a Linux server with JDeveloper on my Windows desktop.

Task 1: Install Oracle JDeveloper Studio
I already had JDeveloper 11.1.1.6 with SOA extensions installed, so this was easy.

Task 2: Install the Fusion Order Demo Application
First thing to do is to obtain the latest version of the demo from OTN; I obtained the R1 PS5 release.
Gotcha #1 – my WinZip wouldn't unzip the file, I had to use 7-Zip.

Task 3: Install Oracle SOA Suite
On the domain, modify the setDomainEnv script by adding "-Djps.app.credential.overwrite.allowed=true" to JAVA_PROPERTIES and restarting the Admin Server.  Also set the JAVA_HOME variable and add Ant to the path.
I created a domain with separate SOA and BAM servers and also set up the Node Manager to make it easier to stop and start components.

Taking a Look at the WebLogic Fusion Order Demo Application
Note that when opening the composite you will get warnings because the components are not yet deployed to MDS.

Deploying Fusion Order Demo

Task 1: Create a Connection to an Oracle WebLogic Server
If some tests complete when you test the connection to the WebLogic domain but other tests fail, for example the JSR-88 tests, then you may need to go into the console and, under each server's Configuration->General->Advanced settings, set the "External Listen Address" to be the name that JDeveloper uses to access the managed server.

Task 2: Create a Connection to the Oracle BAM Server
I can't understand why customers wouldn't want to use BAM.  Monitor Express makes it a matter of a few clicks to provide real time process status information to the business.  Oh yes!  I remember now, several customers' IT staff have told me they don't want the business seeing this data because they will hassle the IT department if something goes wrong, and BAM lets them see it going wrong in real time…

Task 3: Install the Schema for the Fusion Order Demo Application
When editing the Infrastructure->MasterBuildScript->Resources build.properties, make sure that you set jdeveloper.home to the jdeveloper directory underneath the directory that you installed JDeveloper into.  I installed JDeveloper Studio into "C:\JDev11gPS5" so my jdeveloper.home is "C:\JDev11gPS5\jdeveloper".
Gotcha #2 – the ant script throws an error partway through but does not report it at the end, so check carefully for the following error:
oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.lang.NoClassDefFoundError, msg=oracle/jdbc/OracleClob
This occurs because the build.xml does not include the ojdbc6dms library, which has CLOB support, so in JDeveloper add it to the path "oracle.jdbc.path" in the Infrastructure->/DatabaseSchema/Resources build.xml file:

<path id="oracle.jdbc.path">
  <fileset dir="${jdeveloper.home}/../wlserver_10.3/server/lib">
    <include name="ojdbc6.jar"/>
  </fileset>
  <fileset dir="${jdeveloper.home}/../oracle_common/modules/oracle.jdbc_11.1.1">
    <include name="ojdbc6dms.jar"/>
  </fileset>
</path>

Rerun the ant script from Infrastructure/Ant with a target of "buildAll" and it should now complete without errors.

Task 4: Set the Configuration Property for the Store Front Module
Nothing to watch out for here.

Task 5: Edit the Database Connection
Nothing to watch out for here.

Task 6: Deploy the Store Front Module
There is an additional step when deploying: you will be asked for an MDS repository to use.  Best to use the MDS-SOA repository and put the content in its own partition.
Gotcha #3 – when prompted, select the mds-soa MDS repository and choose the soa partition.  Note that this is an MDS partition, not a SOA partition.
Note that when you deploy the StoreFrontServiceSDO_Services application it will populate the local WebLogic LDAP with the demo users.  If this step fails it will be because you forgot to set the "-Djps.app.credential.overwrite.allowed=true" parameter and restart the Admin Server.

Task 7: Deploy the WebLogic Fusion Order Demo Application
Set up the environment for BAM.  When editing the WebLogicFusionOrderDemo->bin->Resources build.properties, make sure that you set oracle.home to the jdeveloper directory underneath the directory that you installed JDeveloper into.  I installed JDeveloper Studio into "C:\JDev11gPS5" so my oracle.home is "C:\JDev11gPS5\jdeveloper".
Gotcha #5a – Make sure that you create directories on the server for the FileAdapter to use as a file directory and a separate control directory, and make sure you set the corresponding properties in the build.properties file:
orderbooking.file.adapter.dir
orderbooking.file.adapter.control.dir
Gotcha #5b – Also make sure you set the following properties in the build.properties file:
soa.domain.name
managed.server.host
managed.server.rmi.port
soa.db.username
soa.db.password
soa.db.connectstring
Note that the soa.server.oracle.home property must be set to the ORACLE_SOA_HOME (usually Oracle_SOA1 under the MW_HOME) on the server.
Gotcha #6 – I found that unless I went into the console to each server's Configuration->Protocols->IIOP->Advanced settings, and set the "Default IIOP Username" and "Default IIOP Password" to be the weblogic user, the deployment failed.
Gotcha #7 – when deploying BAM objects in the seedBAMServerObjects activity I got an exception "java.lang.NoClassDefFoundError: org/apache/commons/codec/binary/Base64", which is caused because the BAM installation under JDeveloper does not have all the required libraries.  To fix this, copy the commons-codec-1.3.jar file from the server machine ORACLE_SOA_HOME/bam/modules/oracle.bam.third.party_11.1.1 to the JDev machine ORACLE_JDEV_HOME/bam/modules/oracle.bam.third.party_11.1.1.
Gotcha #8 – when deploying BAM objects in the seedBAMServerObjects activity I got an error "BAM-02440: ICommand is unable to connect to the Oracle BAM server because user credentials (username/password) have not been specified.".  The quick way to fix this is to change to the directory where the import script was created on the JDeveloper machine (ORACLE_JDEV_HOME\bam\dataObjects\load) and run the load script after setting JAVA_HOME:
..\..\bin\icommand -CMDFILE ImportFODBamObjects.xml
I am sure if I spent more time in the ant scripts I could have found what was wrong with the script for deploying this.

Running Fusion Order Demo
You are now ready to place an order through the frontend app at http://soahost:soaport/StoreFrontModule/faces/home.jspx.  The BAM dashboard is available for you to monitor the progress of your order, and EM is all set to let you monitor the health of the processes.  Enjoy studying a relatively complex example that demonstrates many best practices such as use of MDS.
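Pulling the Gotcha #5a/#5b settings together, the relevant build.properties entries might look like the fragment below.  All the property names come from the gotchas above, but every value is illustrative for a hypothetical host and domain; substitute your own paths, hostnames and credentials:

```
# FileAdapter directories created on the server (Gotcha #5a)
orderbooking.file.adapter.dir=/u01/fod/filein
orderbooking.file.adapter.control.dir=/u01/fod/filecontrol

# Server and database settings (Gotcha #5b)
soa.domain.name=fod_domain
managed.server.host=soahost.example.com
managed.server.rmi.port=8001
soa.db.username=fod
soa.db.password=welcome1
soa.db.connectstring=soahost.example.com:1521:orcl

# Must point at ORACLE_SOA_HOME on the server, not the JDeveloper machine
soa.server.oracle.home=/u01/app/oracle/Middleware/Oracle_SOA1
```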


Development

Whose Port Is It?

Who Owns What Port?

It is not uncommon to be unable to start a server process because some other process is holding onto a network port that the server requires.  The question is: how do you find the offending process?  I thought I would identify some of the commands I use to track down wayward port usage.

Identify the Conflict

The first thing to do is to identify the port that is in use.  Hopefully your log file will indicate which port the server process was unable to obtain.  Even if it did not identify the port, if you know the ports the server requires then you can use the first of my helpful commands:

Windows: netstat -anop tcp
Linux:   netstat -lnt --program

The Windows version lists all network sockets ("-a") using TCP/IP v4 ("-p tcp").  To make it easier to find the listening port I had them listed in numeric format ("-n") rather than using abbreviations.  Other possible protocols are TCP/IP v6 ("tcpv6"), UDP ("udp") and UDP v6 ("udpv6").  Finally I had the netstat command print out the process ID ("-o") of the process using the port.

The Linux version is slightly different in that it lists only listening ports ("-l") in numeric form ("-n") for the TCP protocol ("-t"), both v4 and v6.  For the UDP protocol use "-u".  The process ID and program name are also displayed ("--program").  Note that this is best run as root, because you need root privileges for netstat to show you the PID of a process you don't own.

Find the Culprit

Now that we know which process is holding our port, the next thing to do is find out more about that process.  The second of our helpful commands shows us the command line used to launch our mischievous process:

Windows: tasklist /FI "PID eq <PID>"
Linux:   ps -p <PID> -o args

Unfortunately I haven't found a good way to find the actual command line on a Windows machine.  Tasklist allows you to filter ("/FI") the list of tasks to see the process name associated with a PID ("PID eq <PID>"), but if that process is a service then the process name will show as "svchost.exe".  You may be able to see more information by using Windows Process Explorer, but even that doesn't always tell you what you need to know.

On Linux we can use the trusty ps command to find a given PID ("-p <PID>") and output the command and its associated arguments ("-o args").  From this we know exactly who is using our port.

Armed with this information we can reconfigure the errant process, shut it down, or decide that we need to change the port number for our server process instead.
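The Linux hunt can be reduced to a one-line filter.  Below is a minimal sketch: the function name (find_owner) is my own, and the netstat output is a made-up sample in the standard `netstat -lnt --program` column layout; on a live system you would pipe in the real netstat output (as root).

```shell
#!/bin/sh
# find_owner - print the PID/Program name column for lines whose
# Local Address column (field 4) ends in ":<port>".
find_owner() {
  awk -v p=":$1" '$4 ~ p"$" { print $NF }'
}

# Illustrative sample in `netstat -lnt --program` format (not real output).
sample='tcp        0      0 0.0.0.0:7001            0.0.0.0:*               LISTEN      4242/java
tcp        0      0 127.0.0.1:1521          0.0.0.0:*               LISTEN      3131/tnslsnr'

# On a live system: netstat -lnt --program | find_owner 7001
echo "$sample" | find_owner 7001
```

Run against the sample data this prints the owner of port 7001 (here "4242/java"), which is exactly the PID you would then feed to ps.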


Fusion Middleware

Scripting WebLogic Admin Server Startup

How to Script WebLogic Admin Server Startup

My first car was a 14 year old Vauxhall Viva.  It is the only one of my cars that has ever been stolen, and to this day how they stole it is a mystery to me, as I could never get it to start.  I always parked it pointing down a steep hill so that I was ready to jump start it!  Of course its ability to start was dramatically improved when I replaced the carburetor butterfly valve!

Getting SOA Suite or other WebLogic based systems to start can sometimes be a problem, because the default WebLogic start scripts require you to stay logged on to the computer where you started the script.  Obviously this is awkward, and a better approach is to run the script in the background.  The problem can also be avoided by using a WLST script to start the AdminServer, but that is more work, so I never bother with it.

If you just run the startup script in the background, the standard output and standard error still go to the session where you started the script - not helpful if you log off and later want to see what is happening.  So the next thing to do is to redirect standard out and standard error from the script.

Finally, it would be nice to have a record of the output of the last few runs of the Admin Server, but these should be purged to avoid filling up the directory.

Doing the above three tasks is the job of the script I use to start WebLogic.  The script is shown below.

Startup Script

#!/bin/sh
# SET VARIABLES
SCRIPT_HOME=`dirname $0`
MW_HOME=/home/oracle/app/Middleware
DOMAIN_HOME=$MW_HOME/user_projects/domains/dev_domain
LOG_FILE=$DOMAIN_HOME/servers/AdminServer/logs/AdminServer.out
# MOVE EXISTING LOG FILE
logrotate -f -s $SCRIPT_HOME/logrotate.status $SCRIPT_HOME/AdminServerLogRotation.cfg
# RUN ADMIN SERVER
touch $LOG_FILE
nohup $DOMAIN_HOME/startWebLogic.sh > $LOG_FILE 2>&1 &
tail -f $LOG_FILE

Explanation

Let's walk through each section of the script.

SET VARIABLES

The first few lines of the script just set the environment.  Note that I put the output of the start script into the same location and the same filename that it would go to if I used the Node Manager to start the server.  This keeps it consistent with other servers that are started by the Node Manager.

MOVE EXISTING LOG FILE

The next section keeps a copy of the previous output file by using the logrotate command.  This reads its configuration from the AdminServerLogRotation.cfg file shown below:

/home/oracle/app/Middleware/user_projects/domains/dev_domain/servers/AdminServer/logs/AdminServer.out {
  rotate 10
  missingok
}

This tells the logrotate command to keep 10 copies of the log file (rotate 10), and that a missing previous copy of the log file is not an error condition (missingok).

The logrotate.status file is used by logrotate to keep track of what it has done.  It is ignored when the -f flag is used, causing the log file to be rotated every time the command is invoked.

RUN ADMIN SERVER

UPDATE: Sometimes the tail command starts before the shell has created the log file for the startWebLogic.sh command.  To avoid an error in the tail command I "touch" the log file to make sure that it is there.

The final section actually invokes the standard command to start an Admin Server (startWebLogic.sh) and redirects its standard out and standard error to the log file.  Note that I run the command in the background and use nohup to make it ignore the death of the parent shell.

Finally I tail the log file so that the user experience is the same as running the start command directly.  However, in this case if I hit Ctrl-C only the tail will be terminated; the Admin Server will continue to run as a background process.  This approach allows me to watch the output of the AdminServer but not shut it down if I accidentally hit Ctrl-C or close the shell window.

Restart Script

I also have a restart script, shown below:

#!/bin/sh
# SET VARIABLES
SCRIPT_HOME=`dirname $0`
MW_HOME=/home/oracle/app/Middleware
DOMAIN_HOME=$MW_HOME/user_projects/domains/dev_domain
# STOP ADMIN SERVER
$DOMAIN_HOME/bin/stopWebLogic.sh
# RUN ADMIN SERVER
$SCRIPT_HOME/startAdminServer.sh

This is just like the start script except that it runs the stop WebLogic command followed by my start script.

Summary

The above scripts are quick and easy to put in place for the Admin Server and make the stdout and stderr logging consistent with other servers that are started from the Node Manager.  Now can someone help me push start my car!
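The "rotate 10 / missingok" behaviour is worth seeing concretely.  Below is an illustrative sketch, not a logrotate replacement: a pure-shell function (the name rotate is my own) that does what the config above asks logrotate to do - shift AdminServer.out.N to .N+1, drop the oldest copy, and move the live file to .1 without complaining if it is absent.

```shell
#!/bin/sh
# rotate <logfile> <keep> - keep-N rotation, mimicking "rotate N" + "missingok".
rotate() {
  log=$1; keep=$2
  rm -f "$log.$keep"                  # the oldest copy falls off the end
  n=$keep
  while [ "$n" -gt 1 ]; do
    prev=$((n - 1))
    [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$n"
    n=$prev
  done
  [ -f "$log" ] && mv "$log" "$log.1" # missingok: no error if log is absent
}

# Demonstrate two "server runs" in a scratch directory.
dir=$(mktemp -d)
echo "run one" > "$dir/AdminServer.out"
rotate "$dir/AdminServer.out" 10
echo "run two" > "$dir/AdminServer.out"
rotate "$dir/AdminServer.out" 10
ls "$dir"
```

After the second rotation the most recent run sits in AdminServer.out.1 and the run before it in AdminServer.out.2, which is exactly the history the startup script preserves across restarts.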


SOA Suite

Memory Efficient Windows SOA Server

Installing a Memory Efficient SOA Suite 11.1.1.6 on Windows Server

Well, 11.1.1.6 is now available for download, so I thought I would build a Windows Server environment to run it.  I will minimize the memory footprint of the installation by putting all functionality into the Admin Server of the SOA Suite domain.

Required Software

- 64-bit JDK
- SOA Suite
  - If you want 64-bit then choose "Generic" rather than "Microsoft Windows 32bit JVM" or "Linux 32bit JVM".
  - This page has links to all the required software.
  - If you choose "Generic" then the Repository Creation Utility link does not show; you still need this, so change the platform to "Microsoft Windows 32bit JVM" or "Linux 32bit JVM" to get the software.
  - Similarly, if you need a database then you need to change the platform to get the link to XE for Windows or Linux.

If possible I recommend installing a 64-bit JDK, as this allows you to assign more memory to individual JVMs.  Windows XE will work, but it is better if you can use a full Oracle database because of the limitations on XE that sometimes cause it to run out of space with large or multiple SOA deployments.

Installation Steps

The following flow chart outlines the steps required to install and configure SOA Suite.  The steps in the diagram are explained below.

64-bit?

Is a 64-bit installation required?  The Windows and Linux installers will install 32-bit versions of the Sun JDK and JRockit.  A separate JDK must be installed for 64-bit.

Install 64-bit JDK

The 64-bit JDK can be either HotSpot or JRockit.  You can choose either JDK 1.7 or 1.6.

Install WebLogic

If you are using 64-bit then install WebLogic using "java -jar wls1036_generic.jar".  Make sure you include Coherence in the installation; the easiest way to do this is to accept the "Typical" installation.

SOA Suite Required?

If you are not installing SOA Suite then you can jump straight ahead and create a WebLogic domain.

Install SOA Suite

Run the SOA Suite installer and point it at the existing Middleware home created for WebLogic.  Note that to run the SOA installer on Windows the user must have admin privileges.  I also found that on Windows Server 2008 R2 I had to start the installer from a command prompt with administrative privileges; granting it privileges when it ran caused it to ignore the jreLoc parameter.

Database Available?

Do you have access to a database into which you can install the SOA schemas?  SOA Suite requires access to an Oracle database (it is supported on other databases but I would always use an Oracle database).

Install Database

I use an 11gR2 Oracle database to avoid the XE limitations.  Make sure that you set the database character set to be Unicode (AL32UTF8).  I also disabled the new security settings because they get in the way on a developer database.  Don't forget to check that the number of processes is at least 150, and that the number of sessions is either not set or is set to at least 200 (in the DB init parameters).

Run RCU

The SOA Suite database schemas are created by running the Repository Creation Utility.  Install the "SOA and BPM Infrastructure" component to support SOA Suite.  If you keep the schema prefix as "DEV" then the config wizard is easier to complete.

Run Config Wizard

The config wizard creates the domain which hosts the WebLogic server instances.  To get a minimum footprint SOA installation choose the "Oracle Enterprise Manager" and "Oracle SOA Suite for developers" products.  All other required products will be automatically selected.

The "for developers" installs target the appropriate components at the AdminServer rather than creating a separate managed server to house them.  This reduces the number of JVMs required to run the system and hence the amount of memory required.  This is not suitable for anything other than a developer environment, as it mixes the admin and runtime functions in a single server.  It also takes a long time to load all the required modules, making startup a slow process.

If it exists, I would recommend running the config wizard found in the "oracle_common/common/bin" directory under the Middleware home.  This should have access to all the templates, including SOA.

If you also want to run BAM in the same JVM as everything else then you need to "Select Optional Configuration" for "Managed Servers, Clusters and Machines".  To target BAM at the AdminServer, delete the "bam_server1" managed server that is created by default.  This will result in BAM being targeted at the AdminServer.

Installation Issues

I had a few problems when I came to test everything in my mega-JVM.  The following applications were not targeted, so I needed to target them at the AdminServer:

- b2bui
- composer
- Healthcare UI
- FMW Welcome Page Application (11.1.0.0.0)

How Memory Efficient is It?

On a Windows 2008 R2 Server running under VirtualBox I was able to bring up both the 11gR2 database and SOA/BPM/BAM in 3G of memory.  I allocated a minimum of 512M to the PermGen and a minimum of 1.5G for the heap.  The settings from setSOADomainEnv are shown below:

set DEFAULT_MEM_ARGS=-Xms1536m -Xmx2048m
set PORT_MEM_ARGS=-Xms1536m -Xmx2048m
set DEFAULT_MEM_ARGS=%DEFAULT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m
set PORT_MEM_ARGS=%PORT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m

I arrived at these numbers by monitoring JVM memory usage in JConsole.  Task Manager showed total system memory usage at 2.9G - just below the 3G I allocated to the VM.  Performance is not stellar, but it runs, and I could run JDeveloper alongside it on my 8G laptop, so in that sense it was a result!
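As a sanity check on those numbers, the worst case for the mega-JVM alone is max heap plus max PermGen.  The sketch below just does that arithmetic from the -Xmx and -XX:MaxPermSize values above; it deliberately ignores thread stacks and other native overhead, so treat the result as a lower bound on the JVM's footprint.

```shell
#!/bin/sh
# Rough worst-case footprint of the single SOA JVM from the settings above.
max_heap_mb=2048      # from -Xmx2048m
max_permgen_mb=768    # from -XX:MaxPermSize=768m
total_mb=$((max_heap_mb + max_permgen_mb))
echo "worst-case JVM footprint: ${total_mb}m"
```

That is roughly 2.75G for the JVM alone if it ever grows to its limits, which is why the database and OS have to be squeezed into what little of the 3G VM remains.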


Miscellaneous

My Hiring Approach

Hiring Engineers

I recently had the privilege of performing the technical interviews to evaluate potential new hires into Oracle's support organization.  As my approach is different from many interview processes I thought I would share it with you.  It is basically a three step process.

Step 1 - What Do You Know?

We ask candidates technical questions about what they said they have done on their resume.  It is very common to get responses like "oh, I didn't do very much with that".  In that case we mark them down; if you put it on the resume then we will ask you detailed questions.  For example, if they have "worked" with enterprise Java then we ask about the meaning of EJB transaction settings ("what is the difference between Required and Mandatory") and get them to explain the JSP->Servlet lifecycle ("a JSP is deployed as a mixture of HTML and Java source; how does it become executable code?").  This is really just an honesty and level-setting phase where we see if what they said on the resume is accurate and they understand what they said they understand.

Step 2 - Can You Extrapolate Your Knowledge?

After testing whether they know what they said they know, we ask them questions a little outside their area of knowledge to see if they can extrapolate from what they know.  We encourage them to guess an answer; we want to see if they understand principles and can come up with a reasonable response.  The response doesn't have to be correct - we are looking for a plausible, but possibly wrong, solution.  If they won't guess we mark them down; if they guess wrong but have good reasons we mark them the same as if they got it right.  This puts them into the typical support situation of trying to solve a customer's problem, where you have to make informed guesses and be able to justify to the customer why you want them to do something.

Step 3 - Can You Solve Problems?

Then we ask them to troubleshoot a specific problem.  For example: "yesterday you deployed a new application and it worked fine for users; today they are reporting they get errors saying the database is unavailable".  We want to see if they can take in the big picture and narrow down the problem area in a sensible way.  We are really looking for them to grasp which components are being used and then describe how they would isolate the problem (test whether the database is running, test network connectivity, check whether you can log in to the database from the app server, etc.).  Again this is directly relevant to the support job, and we want them to demonstrate that they know how to troubleshoot - just saying "I would look in the logs" will get them marked down.

Result

Sadly it seems a lot of people are better resume writers than engineers, but this process tends to weed out those individuals.  We were able to hire some excellent engineers based on the above process, and shortly after joining Oracle they were making great contributions to the company, so it seemed to work.  Of course a technical interview is only part of the process.  It is also important that engineers fit into the culture of the company, so an engineer might pass the technical interview but still fail because the interviewer felt they wouldn't fit into the Oracle culture.  The above is a process to help in evaluating technical skills, but there is more to hiring than just that.


Development

Using Coherence with JDeveloper

Configuring JDeveloper for use with Coherence

I am doing some work with Coherence again, so I needed to create some Java code calling the Coherence API and edit some Coherence configuration files in JDeveloper.  The easiest way to do this is to register the Coherence jar file and the Coherence schemas with JDeveloper.  Once that is done you can use JDeveloper's XML insight features to help you create the XML documents.

Register the Coherence Library

To register the Coherence jar file in JDeveloper go to "Tools->Manage Libraries…", select "New…" and then use "Add Entry…" to add the following entries:

Class Path: <COHERENCE_HOME>\lib\coherence.jar
Doc Path:   <COHERENCE_HOME>\doc\api

COHERENCE_HOME is the location where you unzipped the Coherence product.

This lets us use the Coherence API in our Java code by adding the library to our project.

Register the Schemas

To register the Coherence XML schemas with JDeveloper go to "Tools->Preferences…", select "XML Schemas" and choose "Add…".

Browse to the <COHERENCE_HOME>\lib\coherence.jar file and add the following schemas:

- coherence-cache-config.xsd
- coherence-operational-config.xsd
- coherence-pof-config.xsd
- coherence-report-config.xsd
- coherence-report-group-config.xsd
- coherence-rest-config.xsd

Now when you create an XML file for use with Coherence you can choose "XML Document from XML Schema" and "Use Registered Schemas" to be shown suitable schemas for your Coherence config.
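To show the kind of file the registered coherence-cache-config.xsd schema then helps you write, here is a minimal cache configuration sketch.  The cache and scheme names (example-*, example-distributed) are my own illustrative choices, not anything the product requires:

```xml
<?xml version="1.0"?>
<!-- Minimal cache config: map caches named example-* to one distributed scheme. -->
<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>example-*</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
```

With the schema registered, JDeveloper's XML insight offers the valid child elements as you type each of these sections.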


SOA Suite

Too Much Debug

Too Much Debug

Well, it is Christmas, and as is traditional, in England at least, we had a roast turkey dinner.  And of course no matter how big your family, turkeys come in only two sizes: massively too big or enormously too big!  So by the third day of Christmas you are ready never to eat turkey again until Thanksgiving.  Your trousers no longer fit around the waist, your sweater is snug around the midriff, and your children start talking about the return of the Blob.

And my point?  Well, just like the food world, sometimes in the SOA world too much of a good thing is bad for you.  I had just extended my BPM domain with OSB, only to discover that I could no longer start the BAM server or the newly configured OSB server.  The error message I was getting was:

starting weblogic with Java version:
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
ERROR: transport error 202: bind failed: Address already in use
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:690]
Starting WLS with line:
C:\app\oracle\product\FMW\JDK160~2\bin\java -client -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=8453,server=y,suspend=n …

The mention of JDWP points to a problem with debug settings, and sure enough, in a development domain the setDomainEnv script is set up to enable debugging of OSB.  The problem is that the settings apply to all servers started with settings from the setDomainEnv script, when they should really only apply to the OSB servers.  There is a blog entry by Jiji Sasidharan that explains this and provides a fix.  However, that fix disables the debug flag for all servers, not just the non-OSB servers.  So I offer my extension to the fix, which modifies the setDomainEnv script from:

set debugFlag=true

to:

rem Added so that only the OSB server starts in debug mode
if "%SERVER_NAME%"=="osb_server1" (
    set debugFlag=true
)

This enables debugging on the managed server osb_server1 (change this to match the name of one of your OSB servers).  It does not enable the debug flag for any other server, including other OSB servers in a cluster.  After making this change it may be necessary to restart the Admin Server, because it is probably bound to the debug port.

So the moral of this tale is: don't eat too much turkey, don't abuse the debug flag, but make sure you can get the benefits of debugging.

Have a great new year!
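The fix above is for the Windows setDomainEnv.cmd.  On a Linux domain the same guard can be sketched for setDomainEnv.sh like this - the osb_server1 name is from the example above, and the default/echo lines are just scaffolding so the sketch runs on its own (in the real script SERVER_NAME is passed in by the start scripts):

```shell
#!/bin/sh
# Same idea as the Windows fix: only turn on the JDWP debug flag
# when the server being started is the OSB server.
SERVER_NAME="${SERVER_NAME:-AdminServer}"   # normally set by startManagedWebLogic.sh
debugFlag="false"
if [ "${SERVER_NAME}" = "osb_server1" ]; then
  # Added so that only the OSB server starts in debug mode
  debugFlag="true"
fi
echo "SERVER_NAME=${SERVER_NAME} debugFlag=${debugFlag}"
```

Started as the AdminServer the flag stays off, so the Admin Server no longer grabs the debug port; started as osb_server1 the flag comes on and OSB debugging still works.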


Fusion Middleware

SOA in a Windows World

Installing BPM Suite on Windows Server 2008 Domain Controller under VirtualBox

It seems I am working with a number of customers for whom Windows is an important part of their infrastructure.  Security is tied in with Windows Active Directory and many services are hosted using Windows Communication Foundation.  To better understand these customers' environments I got myself a Windows 2008 Server license and decided to install BPM Suite on a Windows 2008 Server running as a domain controller.  This entry outlines the process I used to get it to work.

The Environment

I didn't want to dedicate a physical server to running Windows Server, so I installed it under Oracle VirtualBox.

My target environment was Windows 2008 Server with the Active Directory and DNS roles.  This would give me access to the Microsoft security infrastructure, and I could use it to make sure I understood how to properly integrate WebLogic and SOA Suite security with Windows security.  I wanted to run Oracle under a non-Administrator account, as this is often the way I have to operate on customer sites.  This was my first challenge.  For very good security reasons the only accounts allowed to log on to a Windows domain controller are domain administrator accounts.  I only had resources (and licenses) for a single Windows server, so I had to persuade Windows to let me log on with a non-Domain Admin account.

Logging On with a non-Domain Admin Account

I found a very helpful blog entry on how to do this - Allow Interactive Logon to Domain Controllers in Windows Server 2008.  The key steps from that post are as follows:

- Create a non-Admin user - I created one called "oracle".
- Edit the "Default Domain Controllers" group policy to add the user "oracle" to the "Allow log on locally" policy in Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > User Rights Assignment.
- Force a policy update.

If you didn't get it right then you will get the following error when trying to log on: "You cannot log on because the logon method you are using is not allowed on this computer".  This means that you correctly created the user but the policy has not been modified correctly.

Acquiring Software

The best way to acquire the software is to go to the BPM download page.  If you choose the Microsoft Windows 32bit JVM option you get a list of all the required components and links to download them directly from OTN.  The only download link I didn't use was the database download, because I opted for an 11.2 database rather than the XE link that is given.  The only additional software I added was the 11.1.1.5 BPM feature pack (obtained from Oracle Support as patch #12413651: 11.1.1.5.0 BPM FEATURES PACK) and the OSB software.  The BPM feature pack patch is applied with OPatch, so I also downloaded the latest OPatch from Oracle Support (patch 6880880 for 11.1.0.x releases on Windows 32-bit).

Installing Oracle Database

I began by setting the system environment variable ORACLE_HOSTNAME to the hostname of my Windows machine.  I also added this hostname to the hosts file, mapping it to 127.0.0.1.  When launching the installer from a non-Administrator account you will be asked for Administrator credentials in order to install.

Gotcha with VirtualBox Paths

I mounted the install software as a VirtualBox shared folder and told it to auto-mount.  Unfortunately this auto-mount in Windows only applies to the current user, so when the software tried to run as Administrator it couldn't find the path.
The solution was to launch the installer using a UNC path "\\vboxsrv\<SHARE_NAME>\<PATH_TO_INSTALL_FILES>", because the mount point is available to all users while the auto-mapping is only done at login time for the current user.

Database Install Options

When installing the database I made the following choices to make life easier later; in particular I made sure that I had a UTF-8 character set as recommended for SOA Suite.

- Declined security updates (this is not a production machine)
- Created and configured a database during install
- Chose Server Class database to get character set options later
- Chose single instance database installation
- Chose advanced install to get character set options later
- Chose English as my language
- Chose Enterprise Edition
- Included Oracle Data Extensions for .NET
- Set Oracle Base as C:\app\oracle
- Selected General Purpose/Transaction Processing
- Changed the default database name
- Configuration options:
  - Accepted default memory management settings
  - Changed the character set to AL32UTF8 as required by SOA Suite
  - Unselected "Assert all new security settings" to relax security, as this is not a production system
  - Chose to create sample schemas
- Used Database Control for database management
- Used the file system for database storage
- Didn't enable automated backups (that is what VirtualBox snapshots are for)
- Used the same non-compliant password for all accounts

I set the environment variable ORACLE_UNQNAME to the database name; this is provided on the last screen of the Oracle Database Configuration Assistant.

Configuring for VirtualBox

Because VirtualBox port forwarding settings are global, I changed the DB console listen port (from 1158, using emca) and the database listener port (from 1521, using the EM console) before setting up VirtualBox port forwarding to the new ports.  This required me to re-register the database with the listener and to reconfigure EM.

Firewall Restrictions

After changing my ports I had a final task to do before snapshotting my image: I had to add a new Windows Firewall rule to open up the database ports (EM and listener).

Installing WebLogic Server

With a working database I was now able to install WebLogic Server.  I decided to do a 32-bit install to simplify the process (no need for a separate JDK install).  As this was intended to be an all-in-one machine (developer and server) I accepted the Coherence (needed for SOA Suite) and OEPE (needed for OSB design time tooling) options.  After installing I gave the oracle user full access permissions on the Middleware home I created in C:\app\oracle\product\FMW.

Installing SOA/BPM Suite

Because I was using a 32-bit JVM I had to provide the "-jreLoc" option to the setup.exe command in order to run the SOA Suite installer (see the release notes).  The installer correctly found my Middleware home and installed the SOA/BPM Suite.  After installing I gave the oracle user full access to the new SOA home (C:\app\oracle\product\FMW\Oracle_SOA) and the Oracle common directory (C:\app\oracle\product\FMW\oracle_common).

Running the Repository Creation Utility

I ran the RCU from my host OS rather than from within the Windows guest OS.  This helps avoid unnecessary temporary files being created in the virtual machine.  I selected the SOA and BPM Infrastructure component and left the prefix at the default DEV.  Using DEV makes life easier when you come to create a SOA/BPM domain, because you don't need to change the username in the domain config wizard.  Because this isn't a production environment I also set all the passwords to be the same; again this simplifies things in the config wizard.

Adding the BPM Feature Pack

With SOA installed I updated it to include the BPM feature pack.

Installing the OPatch Update

First I needed to apply patch 6880880 to get the latest OPatch.
The patch can be applied to any Oracle home, and I chose to apply it to the oracle_common home; it seemed to make more sense there than in the Oracle_SOA home.  To apply the patch I moved the original OPatch directory to OPatch.orig and then unzipped the patch in the oracle_common directory, which created a new OPatch directory for me.  Before applying the feature pack patch I opened a command prompt, set the ORACLE_HOME environment variable to the Oracle_SOA home, and added the new OPatch directory to the path.  I then tested the new OPatch by running the command "opatch lsinventory", which showed me the SOA Suite install version.

Fixing a Path Problem

OPatch uses setupCCR.exe, which has a dependency on MSVCR71.dll.  Unfortunately this DLL is not on the path, so by default the call to setupCCR fails with the error "This application failed to start because MSVCR71.dll was not found".  To fix this I found a helpful blog entry that led me to create a new key in the registry at "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\setupCCR.exe" with the default value set to "<MW_HOME>\utils\ccr\bin\setupCCR.exe".  I added a String value to this key with a name of "Path" and a value of "<Oracle_Common_Home>\oui\lib\win32".
This registers the setupCCR application with Windows and adds a custom path entry for this application so that it can find the MSVCR71 DLL.Patching oracle_common HomeI then applied the BPM feature pack patch to oracle_common bySetting ORACLE_HOME environment variable to the oracle_common directory Creating a temporary directory “PATCH_TOP” Unzipping the following files from the patch into PATCH_TOP p12413651_ORACOMMON_111150_Generic.zip p12319055_111150_Generic.zip p12614083_111150_Generic.zip From the PATCH_TOP directory run the command “<Oracle_Common_Home>\OPatch\opatch napply” Note I didn’t provide the inventory pointer parameter (invPtrLoc) because I had a global inventory that was found just fine by OPatch and I didn’t have a local inventory as the patch readme seems to expect. Deleting the PATCH_TOP directory After successful completion of this “opatch lsinventory” showed that 3 patches had been applied to the oracle_common home.Patching Oracle_SOA HomeI applied the BPM feature pack patch to Oracle_SOA bySetting ORACLE_HOME environment variable to the Oracle_SOA directory Creating a temporary directory “PATCH_TOP” Unzipping the following file from the patch into PATCH_TOP p12413651_SOA_111150_Generic.zip From the PATCH_TOP directory run the command “<Oracle_Common_Home>\OPatch\opatch napply” Note again I didn’t provide the inventory pointer parameter (invPtrLoc) because I had a global inventory that was found just fine by OPatch and I didn’t have a local inventory as the patch readme seems to expect. 
Deleting the PATCH_TOP directory After successful completion of this “opatch lsinventory” showed that 1 patch had been applied to the Oracle_SOA home.Updating the Database Schemas Having updated the software I needed to update the database schemas which I did as follows:Setting ORACLE_HOME environment variable to the Oracle_SOA directory Setting Java_HOME to <MW_HOME>\jdk160_24 Running “psa -dbType Oracle -dbConnectString  //<DBHostname>:<ListenerPort>/<DBServiceName> -dbaUserName sys –schemaUserName DEV_SOAINFRA Note that again I elided the invLocPtr parameter Because I had not yet created a domain I didn’t have to follow the post installation steps outlined in the Post-Installation Instructions.Creating a BPM Development DomainI wanted to create a development domain.  So I ran config from <Oracle_Common_Home>\common\bin selecting the following:Create a New Domain Domain Sources Oracle BPM Suite for developers – 11.1.1.0 This will give me an Admin server with BPM deployed in it. Oracle Enterprise Manager – 11.1.1.0 Oracle Business Activity Monitoring – 11.1.1.0 Adds a managed BAM server. I changed the domain name and set the location of the domains and applications directories to be under C:\app\oracle\MWConfig This removes the domain config from the user_projects directory and keeps it separate from the installed software. Chose Development Mode and Sun JDK rather than JRockit Selected all Schema and set password, service name, host name and port. Note when testing the test for SOA Infra will fail because it is looking for version 11.1.1.5.0 but the BPM feature pack incremented it to 11.1.1.5.1.  If the reason for the failure is “no rows were returned from the test SQL statement” then you can continue and select OK when warned that “The JDBC configuration test did not fully complete”.  This is covered in Oracle Support note 1386179.1. Selected Optional Configuration for Administration Server and Managed Servers so that I could change the listening ports. 
Set Admin Server Listen port to 7011 to avoid clashes with other Admin Servers in other guest OS. Set bam_server Listen port to 9011 to avoid clashes with other managed servers in other guest OS. Changed the name of the LocalMachine to reflect hostname of machine I was installing on. Changed the node manager listen port to 5566 to avoid clashes with other Node Managers in other guest OS. Having created my domain I then created a boot.properties file for the bam_server.Configuring Node ManagerWith the domain created I set up Node Manager to use start scripts by running setNMProps.cmd from <oracle_common>\common\bin.I then edited the <MW_Home>\wlserver_10.3\common\nodemanager\nodemanager.properties file and added the following property:ListenPort=5566 Firewall Policy UpdatesI had to add the Admin Server, BAM Server and Node Manager ports to the Windows firewall policy to allow access to those ports from outside the Windows server.Set Node Manager to Start as a Windows ServiceI wanted node manager to automatically run on the machine as a Windows service so I first edited the <MW_HOME>\wlserver_10.3\server\bin\installNodeMgrSvc.cmd and changed the port to 5566.  Then I ran the command as Administrator to register the service.  The service is automatically registered for automatic startup.Set Admin Server to Start as a Windows ServiceI also wanted the Admin Server to run as a Windows service.  There is a blog entry about how to do this using the installSvc command but I found it much easier to use NSSM. To use this I did the following:Downloaded NSSM and put the 64-bit version in my MWConfig directory. Once you start using NSSM the Services you create will point to the location from which you ran NSSM so don’t move it after installing a service! 
Created a simple script to start the Admin Server and redirect its standard out and standard error to a log file.  I redirected to “%DOMAIN_HOME%\servers\AdminServer\logs\AdminServer.out” because this is the location that would be used if the Admin Server were started by the Node Manager.

@REM Point to Domain Directory
set DOMAIN_HOME=C:\app\oracle\MWConfig\domains\bp_domain
@REM Point to Admin Server logs directory
set LOGS_DIR=%DOMAIN_HOME%\servers\AdminServer\logs
@REM Redirect WebLogic stdout and stderr
set JAVA_OPTIONS=-Dweblogic.Stdout="%LOGS_DIR%\AdminServer.out" -Dweblogic.Stderr="%LOGS_DIR%\AdminServer.out"
@REM Start Admin Server
call %DOMAIN_HOME%\startWebLogic.cmd

Registered the script as a Windows service using NSSM:

nssm install "Oracle WebLogic AdminServer" "C:\app\oracle\MWConfig\startAdminServer.cmd"

Note that when you redirect WebLogic stdout and stderr as I have done, the first few lines of output are not captured, so test your script from the command line before registering it as a service.  By default the Admin Server will be restarted by the service if it fails, allowing you to bounce the Admin Server without having to log on to the Windows machine.

Configuring for VirtualBox

Having created the domain and configured Node Manager, I enabled port forwarding in VirtualBox to expose the Admin Server (port 7011), the BAM Server (port 9011) and the Node Manager (port 5566).

Testing It

All that is left is to start the Node Manager as a service, start the Admin Server as a service, start the BAM server from the WebLogic console and make sure that things work as expected.  In this case all seemed fine.  When I shut down the machine and restarted it, everything came up as expected!

Conclusion

The steps above create a SOA/BPM installation running under Windows Server 2008 that is automatically started when Windows Server starts.  The log files can be accessed and read by a non-admin user so the status of the environment can be checked.
Additional managed servers can be started from the Admin console because we have Node Manager installed.  The database, database listener, database control, Node Manager and Admin Server all start up as Windows services when the server is started, avoiding the need for an Administrator to start them.

Installing BPM Suite on Windows Server 2008 Domain Controller under VirtualBox It seems I am working with a number of customers for whom Windows is an important part of their infrastructure.  Security...

Development

Structure in a Flat World

Adding Structure to Flat XML Documents

A friend recently was wondering how to convert a flat document structure to a more structured form.  The type of flat structure is shown in the diagram below: the deptNo and deptName fields repeat for each employee in the department.

This would be better represented as a structured format like the one shown below, in which the department details are represented once per department and employees appear in a sequence called emp.  This is a more natural representation and easier to manipulate elsewhere.

So the question is, how do I get from the flat schema to the structured schema?  The answer lies in the preceding-sibling and following-sibling XPath axes.

To get just the first time a department appears we select all the entries that do not have the same deptNo earlier in the document, using this XPath expression:

<xsl:for-each select="/ns1:collection/ns1:entry[not(ns1:deptNo = preceding-sibling::ns1:entry/ns1:deptNo)]">

Within the first occurrence of a department we then set a variable to hold the department number:

<xsl:variable name="DeptNo" select="ns1:deptNo"/>

Within the department we then put in the employee included in the current node.  We then select all the other entries that have the same department number and add their employee details using the following XPath expression:

<xsl:for-each select="following-sibling::ns1:entry[ns1:deptNo = $DeptNo]">

A sample JDeveloper project to test this is available here.
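The same grouping logic can be sketched outside XSLT.  Here is a minimal Python sketch (the field names are assumed to match the flat schema described above, and the sample records are invented for illustration) that groups flat entries by department, mirroring the preceding-sibling/following-sibling technique:

```python
from collections import OrderedDict

def add_structure(entries):
    """Group flat (deptNo, deptName, empNo, empName) records by department."""
    depts = OrderedDict()
    for e in entries:
        # First time we see a deptNo corresponds to the XPath test
        # not(ns1:deptNo = preceding-sibling::ns1:entry/ns1:deptNo).
        dept = depts.setdefault(e["deptNo"], {"deptNo": e["deptNo"],
                                              "deptName": e["deptName"],
                                              "emp": []})
        # Every entry with the same deptNo contributes an employee, like the
        # following-sibling::ns1:entry[ns1:deptNo = $DeptNo] loop.
        dept["emp"].append({"empNo": e["empNo"], "empName": e["empName"]})
    return list(depts.values())

flat = [
    {"deptNo": "10", "deptName": "ACCOUNTING", "empNo": "7782", "empName": "CLARK"},
    {"deptNo": "10", "deptName": "ACCOUNTING", "empNo": "7839", "empName": "KING"},
    {"deptNo": "20", "deptName": "RESEARCH",   "empNo": "7369", "empName": "SMITH"},
]
structured = add_structure(flat)
```

Each department now appears once with its employees nested under emp, which is exactly the shape the structured schema describes.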


SOA Suite

Mapping the Java World Part I

How to Customise Java/XML Mapping in SOA Suite Part I: Setting Up EclipseLink MOXy

The Challenge

During a recent POC, the customer asked us to integrate with several of their backend EJBs, which hosted legacy code talking to backend systems.  The EJB interfaces could not be changed, and had to be called a certain way to be successful.  The customer was looking to phase out another set of "integration EJBs", written over the last few years, that orchestrated their core backend EJBs.  SOA Suite will allow them to drastically reduce their development time, and provide much better visibility into the processes as they run.  Given their past experiences (writing their integration EJBs was painful and long winded), one of the key requirements was to support their legacy EJBs without writing code in the SOA environment.  At first glance this seemed impossible, because we couldn't change anything on the EJB side.  In addition, many of the parameters to the interfaces contained either incomplete annotations or methods that didn't follow the JavaBeans spec.  This was not previously a problem for them because all their EJBs ran in the same JVM and were making local Java calls.

We decided to use a powerful yet obscure feature of SOA Suite to do the mapping.  Chapter 49.7 of the SOA Suite Developer's Guide mentions this marriage between EclipseLink MOXy and SOA Suite, but it does more than advertised, and works outside of the Spring Framework components.  We decided to use this functionality to "fix" some of the things we had mapping issues with in the customer code.  Additionally, we used the framework to do other helpful tasks, such as changing namespaces, fixing arrays, and removing unnecessary mappings.  In this article we'll cover the theory behind the use of this functionality, basic setup and usage, and several examples to get you started.

Background

When we use an EJB Reference or a Spring component in SOA Suite we usually want to wire it to a non-Java resource.
When we do this, JDeveloper uses JAXB to create an XML representation of the parameters and return values of the methods in the Java interface we are using.  In this article we will show how to override those mappings.  Overriding the default generation of mappings allows us to specify target namespaces, rationalize the structure of the data and remove unneeded properties from the Java classes.  Some things we may want to customize include:

Specifying concrete implementations for abstract classes and interfaces in the interface  This allows us to map to Java objects in the interface which cannot be instantiated directly.  For example, we often have lists of abstract classes or interfaces; by specifying the possible concrete implementations of these classes we can generate an XML schema that includes additional properties available only through the concrete classes.
Hiding unwanted properties  This allows us to remove properties that are not needed for our implementation, or not needed because they are convenience properties, such as the length of an array or collection, which can easily be derived from the underlying array or collection.
Providing wrappers for arrays and collections  The default mapping for an array or collection is a list of repeating elements.  We can modify the mapping to provide a wrapper element that represents the whole array or collection, with the repeating elements appearing a level down inside it.
Changing WSDL namespaces  It is often necessary to change the namespaces in a generated WSDL to match a corporate standard or to avoid conflicts with other components that are being used.

Approach

SOA Suite allows us to describe in XML how we want a Java interface to be mapped from Java objects into XML.  The file that does this is called an “Extended Mapping” (EXM) file.  When generating a WSDL and its associated XML Schema from a Java interface, SOA Suite looks for an EXM file corresponding to the Java interface being generated from.
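As an illustration of the array/collection wrapper customization above, an OXM mapping along these lines should do the job.  This is a sketch only: the class and attribute names are hypothetical, and the exact element nesting should be checked against the EclipseLink MOXy OXM schema.

```xml
<xml-bindings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/oxm">
  <java-types>
    <java-type name="soa.cookbook.QuoteRequest">
      <java-attributes>
        <!-- Wrap the repeating <product> elements in a single <products> element -->
        <xml-element java-attribute="products" name="product">
          <xml-element-wrapper name="products"/>
        </xml-element>
      </java-attributes>
    </java-type>
  </java-types>
</xml-bindings>
```

This is the XML equivalent of annotating the Java property with @XmlElementWrapper, which is exactly the kind of change we can now make without touching the Java source.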
Without this file the mapping will be the “default” generation, which simply attempts to take each field and method in the Java code and map it to an XML type in the resulting WSDL.  The EXM file is used to describe or clarify the mappings to XML, and uses EclipseLink MOXy to provide an XML version of Java annotations.  This means that we can apply the equivalent of Java annotations to Java classes referenced from the interface, giving us complete control over how the XML is generated.  This is illustrated in the diagram, which shows how the WSDL interface mapping depends on the Java interface of the EJB reference or Spring component being wired (obviously), but is modified by the EXM file, which in turn may embed or reference an XML version of JAXB annotations (using EclipseLink MOXy).

The mapping will automatically take advantage of any class annotations in the Java classes being mapped, but the XML descriptions can override or add to these annotations, allowing us fine-grained control over our XML interface.  This allows changes to be made without touching the underlying Java code.

Setup

Using the mapper out of the box is fairly simple.  Suppose you set up an EJB Reference or Spring component inside your composite, and you'd like to call it from a Mediator or BPEL process, which expect to operate on a WSDL.  Simply drag a wire from the BPEL process or Mediator to the EJB or Spring component and you should see a WSDL generated for you, which contains the equivalent of the Java component's business interface and all required types.  This happens automatically, as the tool goes through the classes in your target.

But what if you get a "Schema Generation Error", or the generated WSDL isn't correct?  As discussed earlier, there may be a number of changes we need or want to make to the mapping.  In order to use an "Extended Mapping", or EXM, file we need to do the following:

We need to register the EclipseLink MOXy schema with JDeveloper.
Under JDeveloper Tools->Preferences->XML Schemas we click Add… to register the schema as an XML extension.  The schema is found in a jar file located at <JDEV_HOME>/modules/org.eclipse.persistence_1.1.0.0_2-1.jar, inside this jar at /xsd/eclipselink_oxm_2_1.xsd, so the location we register is jar:file:/<JDEV_HOME>/modules/org.eclipse.persistence_1.1.0.0_2-1.jar!/xsd/eclipselink_oxm_2_1.xsd, where <JDEV_HOME> is the location where you installed JDeveloper.  NOTE: This will also work as OXM instead of XML, but if you use an .oxm extension then in each project that uses it you must add a rule to copy .oxm files from the source to the output directory when compiling.

Change the order of the source paths in your SOA project to have SCA-INF/src first.  This is done using the Project Properties…->Project Source Paths dialog.  All files related to the mapping will go here; for example, <Project>/SCA-INF/src/com/customer/EXM_Mapping_EJB.exm, where com.customer is the associated Java package.

We will now use a wizard to generate a base mapping file.  Launch the New XML Document from XML Schema wizard (File->New->All Technologies->General->XML Document from XML Schema).  Specify a file with the name of the Java interface and an .exm extension in a directory corresponding to the Java package of the interface under SCA-INF/src.  For example, if your EJB adapter defined soa.cookbook.QuoteInterface as the remote interface, then the directory should be <Project>/SCA-INF/src/soa/cookbook and so the full file path would be <Project>/SCA-INF/src/soa/cookbook/QuoteInterface.exm.  By using the .exm extension we are able to Use Registered Schemas, which will automatically map to the correct schema so that future steps in the wizard understand what we are doing.  The weblogic-wsee-databinding schema should already be selected; select a root element of java-wsdl-mapping and generate to a depth of 3.  This will give us a basic file to start working with.
As recommended by Oracle, separate out the mappings per package by using the toplink-oxm-file element.  This allows you to define re-usable, per-package mapping files outside of the EXM file.  Since the EXM file and the embedded mappings have different XML root elements, defining them separately allows JDeveloper to provide validation and completion.  A sample include is shown below:

<?xml version="1.0" encoding="UTF-8" ?>
<java-wsdl-mapping xmlns="http://xmlns.oracle.com/weblogic/weblogic-wsee-databinding">
  <xml-schema-mapping>
    <toplink-oxm-file file-path="./mappings.xml" java-package="soa.cookbook"/>
  </xml-schema-mapping>
</java-wsdl-mapping>

Create an "OXM Mapping" file to store custom mappings.  As mentioned, these files are per package, separate from the EXM files, and re-usable.  We can use the New XML Document from XML Schema wizard to create these as well.  In this case they will have an .xml or .oxm extension, use the registered persistence schema (http://www.eclipse.org/eclipselink/xsds/persistence/oxm), and be stored relative to the EXM file.  That is, they can go in the same directory, or in other directories, as long as you refer to them by relative path from the EXM file.

<?xml version="1.0" encoding="UTF-8" ?>
<xml-bindings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/oxm">
  <!-- Set target namespace via the namespace attribute -->
  <xml-schema namespace="http://cookbook.soa.mapping/javatypes"
              element-form-default="QUALIFIED"/>
</xml-bindings>

In the newly created OXM Mapping file, we can use completion and validation to ensure we follow the EclipseLink MOXy documentation.
For example, to declare that a field is transient and should not show up in the WSDL mapping, do this:

<?xml version="1.0" encoding="UTF-8" ?>
<xml-bindings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/oxm">
  <!-- Set target namespace via the namespace attribute -->
  <xml-schema namespace="http://cookbook.soa.mapping/javatypes"
              element-form-default="QUALIFIED"/>
  <java-types>
    <java-type name="soa.cookbook.QuoteRequest">
      <java-attributes>
        <!-- Remove mappings by making them transient via the xml-transient element -->
        <xml-transient java-attribute="product"/>
      </java-attributes>
    </java-type>
  </java-types>
</xml-bindings>

Once complete, delete any existing wires to the Java components and rewire.  You should notice the dialog box change to indicate that an extended mapping file was used.

As an iterative process, changing a mapping is quite easy.  Simply delete the wire, and JDeveloper will offer to delete the WSDL file, which you should do.  Make updates to the EXM and OXM Mapping files as required, and rewire.  Often this occurs several times during development, as there are multiple reasons to make changes.  We will cover some of these in the next entry.

Summary

The extended mapping file puts the SOA composite developer in control of his own destiny when it comes to mapping Java into XML.  It frees him from the tyranny of Java developer specified annotations embedded in Java source files and allows the SOA developer to customize the mapping for his own needs.  In this blog entry we have shown how to set up and use extended mapping in SOA Suite composites.
In the next entry we will show some of the power of this mapping.

Patches

To use this effectively you need to download the following patch from MetaLink:

12984003 - SOA SUITE 11.1.1.5 - EJB ADAPTER NONBLOCKINGINVOKE FAILS WITH COMPLEX OBJECTS

Note that this patch includes a number of fixes, including a performance fix for the Java/XML mapping in SOA Suite.

Sample Code

There is a sample application uploaded as EXMdemo.zip.  Unzip the provided file and open EXMMappingApplication.jws in JDeveloper.  The application consists of two projects:

EXMEJB  This project contains an EJB and needs to be deployed before the other project.  The EJB provides an EJB wrapper around a POJO used in the Spring component in the EXMMapping project.
EXMMapping  This project contains a SOA composite which has a Spring component that is called from a Mediator; there is also an EJB reference.  The Mediator calls the Spring component or the EJB based on an input value.  Deploy this project after deploying the EJB project.

Key files to examine are listed below:

QuoteInterface.java in package soa.cookbook  This is the interface implemented by both the Spring component and the EJB.  It takes a QuoteRequest object as an input parameter and returns a QuoteResponse object.
Quote.java_diagram in package soa.cookbook  A UML class diagram showing the structure of the QuoteRequest and QuoteResponse objects.
EXM_Mapping_EJB.exm in package soa.cookbook  The EXM mapping file for the EJB.  This is used to generate the EXM_Mapping_EJB.wsdl file.
QuoteInterface.exm in package soa.cookbook  The EXM mapping file for the Spring component.  This is used to generate the QuoteInterface.wsdl file.
mappings.xml in package soa.cookbook  Contains mappings for the QuoteRequest and QuoteResponse objects.  It is used by both EXM files (they both include it, showing how we can re-use mappings).  We will cover the contents of this file in the next installment of this series.
Sample Request Message

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body xmlns:ns1="http://cookbook.soa.mapping/types">
    <ns1:quoteRequest>
      <ns1:products>
        <ns1:product>Product Number 1</ns1:product>
        <ns1:product>Product Number 2</ns1:product>
        <ns1:product>Product Number 3</ns1:product>
      </ns1:products>
      <ns1:requiredDate>2011-09-30T18:00:00.000-06:00</ns1:requiredDate>
      <!-- provider should be "EJB" or "Spring" to select the appropriate target -->
      <ns1:provider>EJB</ns1:provider>
    </ns1:quoteRequest>
  </soap:Body>
</soap:Envelope>

Co-Author

This blog article was co-written with my colleague Andrew Gregory.  If this becomes a habit I will have to change the title of my blog!

Acknowledgements

Blaise Doughan, the EclipseLink MOXy lead, was extremely patient and helpful as we worked our way through the different mappings.  We also had a lot of support from David Twelves, Chen Shih-Chang, Brian Volpi and Gigi Lee.  Finally, thanks to Simone Geib and Naomi Klamen for helping to coordinate the different people involved in researching this article.


Fusion Middleware

Coping with Failure

Handling Endpoint Failure in OSB

Recently I was working on a POC where we had demonstrated stellar performance with OSB fronting a BPEL composite calling back-end EJBs.  The final test was a failover test: killing an OSB server and bringing it back online, then killing a SOA (BPEL) server and bringing it back online, and finally killing a back-end EJB server and bringing it back online.  All was going well until the BPEL failover test, when for some reason OSB refused to mark the BPEL server as down.  It turns out we had forgotten a very important setting, so this entry outlines how to handle endpoint failure in OSB.

Step 1 – Add Multiple Endpoints to the Business Service

The first thing to do is create multiple endpoints for the business service, pointing to all available back ends.  This is required for HTTP/SOAP bindings.  In theory, if using the T3 protocol then a single cluster address is sufficient and load balancing will be taken care of by T3 smart proxies.  In this scenario, though, we will focus on HTTP/SOAP endpoints.

Navigate to Business Service->Configuration Details->Transport Configuration and add all your endpoint URIs.  Make sure that Retry Count is greater than 0 if you don’t want to pass failures back to the client.  In the example below I have set up links to three back-end web service instances.  Go to Last and Save the changes.

Step 2 – Enable Offlining & Recovery of Endpoint URIs

When a back-end service instance fails we want to take it offline, meaning we want to remove it from the pool of instances to which OSB will route requests.  We do this by navigating to Business Service->Operational Settings and selecting the Enable check box for Offline Endpoint URIs in the General Configuration section.
This causes OSB to stop routing requests to a back end that returns errors (if the transport setting Retry Application Errors is set) or fails to respond at all.

Offlining the service is good because we won’t send any more requests to a broken endpoint, but we also want to add the endpoint back when it becomes available.  We do this by setting Enable with Retry Interval in General Configuration to some non-zero value, such as 30 seconds.  Then every 30 seconds OSB will add the failed service endpoint back into the list of endpoints.  If the endpoint is still not ready to accept requests then it will error again and be removed again from the list.  In the example below I have set up a 30-second retry interval.  Remember to hit Update and then commit all the session changes.

Considerations on Retry Count

A couple of things to be aware of on retry count.  If you set the retry count to greater than zero then endpoint failures will be transparent to OSB clients, other than the additional delay they experience.  However, if the request is mutative (changes the back end) then there is no guarantee that the request was not executed before the endpoint failed to return the result, in which case you would submit the mutative operation twice.  If your back-end service can’t cope with this then don’t set retries.

If your back-end service can’t cope with retries then you can still get the benefit of transparent retries for non-mutative operations by creating two business services: one with retry enabled that handles non-mutative requests, and one with retry set to zero that handles mutative requests.

Considerations on Retry Interval for Offline Endpoints

If you set the retry interval to too small a value then it is very likely that your failed endpoint will not have recovered, and so you will waste time on a request failing to contact that endpoint before failing over to a working endpoint; this will increase the client response time.
Work out what a typical unplanned outage time for a node would be (such as that caused by a JVM failure and subsequent restart) and set the retry interval to, say, half of this, as a compromise between causing additional client response time delays and adding the endpoint back into the mix as soon as possible.

Conclusion

Always remember to set the Operational Setting to enable offlining of endpoint URIs, and then you won’t be surprised in a failover test!
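The offlining behaviour described above can be sketched as a small simulation.  This is an illustrative model of the mechanism, not OSB code: an endpoint that errors is removed from rotation and rejoins the pool once the retry interval has elapsed.

```python
class EndpointPool:
    """Toy model of OSB endpoint offlining: failed endpoints are removed
    from rotation and re-added after retry_interval seconds."""
    def __init__(self, uris, retry_interval=30):
        self.online = list(uris)
        self.offline = {}          # uri -> time at which to retry it
        self.retry_interval = retry_interval

    def pick(self, now):
        # Re-add any offline endpoint whose retry interval has elapsed.
        for uri, due in list(self.offline.items()):
            if now >= due:
                del self.offline[uri]
                self.online.append(uri)
        return self.online[0] if self.online else None

    def mark_failed(self, uri, now):
        # Offline the endpoint; schedule it to rejoin the pool later.
        if uri in self.online:
            self.online.remove(uri)
            self.offline[uri] = now + self.retry_interval

pool = EndpointPool(["http://host1/svc", "http://host2/svc"], retry_interval=30)
first = pool.pick(now=0)            # host1 is chosen
pool.mark_failed(first, now=0)      # host1 errors and is taken offline
second = pool.pick(now=10)          # only host2 remains in rotation
pool.pick(now=31)                   # after 30s host1 rejoins the pool
```

If the retry interval were much smaller than the node's recovery time, host1 would rejoin, fail again, and be offlined repeatedly, which is exactly the wasted-request cost discussed above.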


Fusion Middleware

Moving Address

Managing IP Addresses with Node Manager

Moving house and changing address is always a problem.  Auntie Matilda and the Mega Credit card company continue to send letters to the old address for years, which are dutifully forwarded by the new occupants.  Every few months the dear folks at the Bristol, England Oracle office start to feel guilty about the amount of mail addressed to me, so they stick it in a FedEx envelope and send it out to the Colorado Springs Oracle office, where I open it and throw it all in the recycling bin.  So it is with some relief that I can reveal how easy it is to have Node Manager take care of all moving-address requirements for your managed WebLogic servers.  My colleague James Bayer pointed out to me last week that there have been some enhancements to Node Manager in the way it handles IP addresses.

Justification

Some WebLogic managed servers need to be able to move from one machine to another, and when they move, so that everyone else can find them, we need to move their IP listening addresses at the same time.
This “Whole Server Migration”, sometimes referred to as “WSM” to deliberately cause confusion with “Web Services Manager”, can occur for a number of reasons:

Allow recovery of XA transactions by the transaction manager in that WebLogic instance
Allow recovery of messages in a JMS queue managed by a JMS server in that WebLogic instance
Allow migration of singleton services, for instance the BAM server in SOA Suite

Early History

We used to enable whole server migration in the following way:

Grant sudo privileges to the “oracle” user on the ifconfig and arping commands (which are used by the wlsifconfig script called by Node Manager) by adding the following to the /etc/sudoers file:

oracle ALL=NOPASSWD: /sbin/ifconfig, /sbin/arping

Tell Node Manager which interface it is managing (in the example below, the eth1 network interface) and what the netmask is for that interface (in our example a 24-bit netmask) by adding the following to the nodemanager.properties file:

Interface=eth1
NetMask=255.255.255.0
UseMACBroadcast=true

Identify target machines for the cluster in the WebLogic console under Environment->Clusters->cluster_name->Migration
Identify a target machine for the managed server in the WebLogic console under Environment->Servers->server_name
Identify a failover machine or machines for the managed server in the WebLogic console under Environment->Servers->server_name->Migration

Once configured like this, when a managed server is started the Node Manager will first check that the address is not in use and then bring up the IP address on the given interface before starting the managed server; the IP address is brought up on a sub-interface of the given interface, such as eth0:1.  Similarly, when the managed server is shut down or fails, the Node Manager will release the IP address if it allocated it.
When failover occurs, the Node Manager again checks for IP usage before starting the managed server on the failover machine.

A Problem

The problem with this approach is what happens when you have a managed server listening on more than one address.  We can only provide one Interface property, and even if multiple Interface properties were allowed (they are not) we would not know which NetMask to apply to each.  So for many years we have labored under the inability to have a server support both multiple listening addresses (channels in WebLogic parlance) and whole server migration – until now.

Solution

The latest Node Manager comes with a different syntax for managing IP addresses.  Instead of the Interface and NetMask properties we now have a property named after an interface on our computer.  For this interface we identify the range of IP addresses we manage on that interface and the netmask associated with that address range.  This allows us to have multiple listening addresses in our managed server, listening on different interfaces.  An example of adding support for multiple listening addresses on two interfaces (bond0 and eth0) is shown below:

bond0=10.1.1.128-10.1.1.254,NetMask=255.255.255.0
eth0=10.2.3.10-10.2.3.127,NetMask=255.255.240.0
UseMACBroadcast=true

So now, when a server has multiple listen channels, the Node Manager will check which address range each listen channel falls into and start the appropriate interface.

A Practical Application

A practical example of where this is useful is when we have an ExaLogic machine with an external web server or load balancer.  We want the managed servers to talk to each other using the internal InfiniBand network, but we want to access the managed servers externally over the 10 Gigabit Ethernet network.  With the enhancements to the Node Manager we can do just this.

Acknowledgements

Thanks to James Bayer, Will Howery and Teju for making me aware of this functionality on a recent ExaLogic training course.
Thanks to them, my managed servers can now move multiple addresses without the constant forwarding of mail that I get from the Bristol office.  Now if they can just get the Bristol office to stop forwarding my mail…

ExaLogic Documentation on Server Migration
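The range-matching that Node Manager performs for each listen channel can be sketched as follows.  This is an illustrative Python sketch of the logic only (the property format follows the example above; the parsing is an assumption, not Node Manager's actual implementation):

```python
import ipaddress

def parse_ranges(properties):
    """Parse lines like 'bond0=10.1.1.128-10.1.1.254,NetMask=255.255.255.0'
    into (interface, start, end, netmask) tuples."""
    ranges = []
    for line in properties.splitlines():
        if "=" not in line or "-" not in line:
            continue  # skip properties such as UseMACBroadcast=true
        iface, rest = line.split("=", 1)
        addr_range, netmask = rest.split(",NetMask=")
        start, end = (ipaddress.ip_address(a) for a in addr_range.split("-"))
        ranges.append((iface, start, end, netmask))
    return ranges

def interface_for(listen_address, ranges):
    """Return the interface whose managed address range contains the address."""
    addr = ipaddress.ip_address(listen_address)
    for iface, start, end, _netmask in ranges:
        if start <= addr <= end:
            return iface
    return None

props = """bond0=10.1.1.128-10.1.1.254,NetMask=255.255.255.0
eth0=10.2.3.10-10.2.3.127,NetMask=255.255.240.0
UseMACBroadcast=true"""
ranges = parse_ranges(props)
```

With this configuration, a listen channel on 10.1.1.200 falls into the bond0 range while one on 10.2.3.50 falls into the eth0 range, so each address is brought up on the correct interface.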


SOA Suite

Begin the Clone Wars Have!

Creating a New Virtual Machine from an Existing Virtual Disk

In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image, and it has served me well for some proofs of concept and also for some testing of different JVMs.  However, I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image.

What are my Options?

There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include:

Create new virtual disk images for each new VM.
Clone the existing disk images and point the new VM at the cloned images.
Point the new VM at the existing snapshots.

#1 is too much like hard work: install OEL twice, install a database again, install FMW again, run RCU again!  Life is too short!

#2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However, this duplicates the disk and means that it now occupies three times the space: once for the original disk and twice more for the two clones I would need.

#3 is the most space-efficient way of doing things.  It does mean, however, that I can only run the new “cloned” images if I have access to the original image, because that is where the base snapshots reside.  This is not a problem for me as long as I remember to keep all three images together.  So this is the approach we will follow.

Snapshot, What Snapshot?

As we are going to create new virtual machines based on existing snapshots, we need to figure out which snapshot to use.
We do this by opening the “Media Manager” from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the “Attached to:” comment.  In my case I wanted the FMW installed snapshot, because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters, as that was all that was needed to uniquely identify the snapshot file).  When we create the new machines we will point them at the snapshot filename we have just checked.

Network or NotWork?

Because we want the two new machines to communicate with each other when hosted on different physical machines, we can’t use the default NAT networking mode without a lot of hassle.  At the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world.  To achieve all these requirements I created two network adapters for each machine.

Adapter 1 was a standard NAT mapping.  This allows each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VirtualBox-provided NAT gateway.  This is the same as the existing configuration.

The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card, and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don’t get any clashes with other machines on your network.  Of course you could always get proper fixed IP addresses from your network people, but I have several people using my images, and as long as I don’t have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.
If it is available I would suggest using the 10.0.3.* network, as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If the pings time out then you are probably safe to use that range.

Creating the New VMs

Now that I had collected the data that I needed, I went ahead and created the new VMs.  When asked for a “Boot Hard Disk” I used the “Choose a virtual hard disk file…” link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines, because that snapshot had the database with the RCU completed (which I wanted for my DB machine) and the SOA software installed (which I wanted for my FMW machine).

After the initial creation of the virtual machine, go into the network settings section and enable a second adapter, which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use a fixed IP and the NAT adapter to use DHCP.

We are now ready to start the VMs and reconfigure Linux.

Reconfiguring Linux

Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings.

Changing the Hostname

I renamed both hosts by running the hostname command as root:

hostname vboxfmw.oracle.com

I also edited the /etc/sysconfig/network file and set the correct hostname in there:

HOSTNAME=vboxfmw.oracle.com

Changing the Network Settings

I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.
So I went into the System->Preferences->Network Connections screen and explicitly set the “Device MAC address” for the two adapters.  Having correctly mapped the Linux adapters to the VirtualBox adapters, I then set the bridged adapter to use fixed IP addressing rather than DHCP.  There is no need for additional routing or default gateways because we expect the two machines to be on the same LAN segment.

Updating the Hosts File

Having renamed the machines and reconfigured the network, I then updated the /etc/hosts file to:

- refer to the new machine name
- add a new line to provide an additional IP address for my server (the new fixed IP address)
- add a new line for the fixed IP address of the other virtual machine

10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager
10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager
10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager
127.0.0.1       localhost.localdomain   localhost
::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6

To make sure everything took effect I restarted the server.

Reconfiguring the Database on the DB Machine

Because we changed the hostname, the listener and the EM console no longer start, so I needed to modify the listener.ora to use the new hostname and also rebuild the EM configuration, because it too relies on the hostname.

I edited $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:

      (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521))

After changing the listener.ora I was able to start the listener using:

lsnrctl start

I also had to reconfigure the EM database control.
I first deconfigured it using the command:

emca -deconfig dbcontrol db -repos drop

This drops the repository and removes any existing registered dbcontrols.  I then re-configured it using the following command:

emca -config dbcontrol db -repos create

This creates the EM repository and then configures and starts dbcontrol.  Now my database machine is ready, so I can close it down and take a snapshot.

Disabling the Database on the FMW Machine

I set up the database to start automatically by creating a service called “dbora”.  On the FMW machine I do not need the database running, so I can prevent it auto-starting by running the following command:

chkconfig --del dbora

Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don’t run it, it won’t cost me anything.  I can now close the FMW machine down and take a snapshot.

Creating a New Domain

The FMW machine is now ready for me to create a new domain.  When creating the domain I can point it at the second machine, which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines.

Gotchas in Snapshotting

VirtualBox does not support the concept of linked machines in a network like some virtualization technologies do, so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share; then all we would need to snapshot would be the database machine, because it would hold all the state and configuration.

The Sky’s the Limit

We have covered a simple case of having just two machines.
I have a more complicated configuration in which two machines run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember which machine holds state and what the consequences of taking a snapshot are.
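As a footnote to the snapshotting gotchas: the paired shutdown-and-snapshot can be scripted from the host with VBoxManage.  This is a sketch, echoed as a dry run so nothing is executed; the VM names match this post, the snapshot name is illustrative, and you would remove the echo to run it for real.

```shell
# Take a matching snapshot of both VMs so the database stays in sync
# with the middleware.  Echoed as a dry run; drop 'echo' to execute.
SNAP="db-fmw-in-sync"
for vm in vboxdb vboxfmw; do
  echo VBoxManage controlvm "$vm" poweroff
  echo VBoxManage snapshot "$vm" take "$SNAP"
done
```

Taking the snapshot only after both VMs are down is what keeps the domain configuration and the database state consistent with each other.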


Development

Building Your Own Path

Custom XPath in SOA Suite 11g

Usually between standard XPath functions, BPEL extension functions and Oracle extension functions you have more than enough functionality, but sometimes you will need a little bit more.  In this entry I will show how to add a new custom XPath function; in this case we will add a function to return the current system time in milliseconds as a string.

To implement a custom XPath function we need to implement a particular interface in Java and provide a descriptor file to tell SOA how the custom Java code is mapped to an XPath function.

The Custom XPath Function

I created a Java project in JDeveloper and added the “SOA Runtime” library to the project because that includes the interface I need to implement.  I then created a class and implemented the functionality I needed as a static Java method.  The XSLT processor needs the custom XPath function to be implemented as a static Java method with a particular signature, as shown below:

public static Object getCurrentTimeMillis(
                              IXPathContext iXPathContext,
                              List list
                              ) throws XPathFunctionException {
  return String.valueOf(System.currentTimeMillis());
}

The name of the method can be anything, but it must be static and take the given parameters.  The list parameter is a list of parameters passed to the XPath function; in this case we are not using any parameters.  The iXPathContext provides access to useful functions to convert namespace prefixes to URIs and to access the type of the calling component and its variables.

The rest of SOA Suite requires the custom XPath expression to be in a class that implements a specific interface - oracle.fabric.common.xml.xpath.IXPathFunction.
I implemented this interface by returning the result from my static function implemented for XSLT, as shown below:

public Object call(
                   IXPathContext iXPathContext,
                   List list
                   ) throws XPathFunctionException {
  return getCurrentTimeMillis(iXPathContext, list);
}

Note that the method name must be “call” and non-static because we are implementing an interface.  Otherwise the signature is the same as the one we implemented for the XSLT processor, so we can just call that static method.  We can now compile the Java class to verify that all is well.

The Descriptor File

We now need to create a descriptor to describe to SOA Suite and the XSLT processor what custom function we have created.  We do this by creating a file called “ext-soa-xpath-functions-config.xml” in the “META-INF” directory with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<soa-xpath-functions
  xmlns="http://xmlns.oracle.com/soa/config/xpath"
  xmlns:blog="http://www.oracle.com/XSL/Transform/java/antony.blog.XPathGetCurrentTimeMillis">
  <function name="blog:getCurrentTimeMillis">
    <className>antony.blog.XPathGetCurrentTimeMillis</className>
    <return type="string"/>
    <desc>returns the current system time in milliseconds</desc>
    <detail>Returns time in milliseconds as reported by
            java.lang.System.currentTimeMillis().
            It is returned as a string to keep full fidelity.</detail>
  </function>
</soa-xpath-functions>

The namespace for the function must be “http://www.oracle.com/XSL/Transform/java/” followed by the full classname of the implementing class, in my case “antony.blog.XPathGetCurrentTimeMillis”.  This is required by the XSLT processor.  The namespace prefix can be anything you like; I chose “blog”.  The function name can also be anything, but it must have a namespace prefix.
The className is the implementing class, “antony.blog.XPathGetCurrentTimeMillis”.  Valid “type” values for “return” are shown in the table below:

Configuration Type    Java Method Return Type
string                java.lang.String
boolean               boolean
number                int, float or double
node-set              oracle.xml.parser.v2.XMLNodeList
tree                  oracle.xml.parser.v2.XML

Finally we can provide a brief description in “desc” and a more expansive explanation in “detail”.

Creating a JAR File

The custom XPath function is now complete and we need to package it up as a jar file.  I used the JDeveloper project property Deployment to create a new jar file deployment profile.  I added the source path to the “Contributors” to include the META-INF directory, and then in the “Filters” I filtered out the “.java” file.  Our custom XPath function is now packaged and ready to register.

Registering with JDeveloper

Before using the custom function in JDeveloper it needs to be registered.  To do this we go to the “SOA” section in “Preferences” and add the newly created jar file.  To make JDeveloper see the file we then need to restart JDeveloper.  After doing this the XPath function appears in the “User Defined Extension Functions” section of the expression builder.

Registering with SOA Suite

To use the custom function in SOA Suite we need to add it to the SOA extension libraries.  We do this by copying the jar file to the $ORACLE_SOA_HOME/soa/modules/oracle.soa.ext_11.1.1 directory on the SOA Suite installation.
We then run ant in that directory (setting JAVA_HOME if necessary) by executing $FMW_HOME/modules/org.apache.ant_1.7.1/bin/ant:

[oracle@soa.ext_11.1.1]$ ../../../../modules/org.apache.ant_1.7.1/bin/ant
Buildfile: build.xml

create-manifest-jar:
     [echo] Creating oracle.soa.ext at …/FMW/Oracle_SOA/soa/modules/oracle.soa.ext_11.1.1/oracle.soa.ext.jar

BUILD SUCCESSFUL
Total time: 1 second

We then need to restart the SOA Suite to get it to recognise the change.

Testing

Finally we can create a test composite and deploy it to the SOA Server to test that our new XPath functions behave as we expect.  I created a composite that first does an assign from the new function in a mediator and then passes the result on to a BPEL 1.1 process that also does an assign, followed by a call to a BPEL 2.0 process that does an assign followed by an XSL transformation.  Finally the result is returned so that I can verify everything is fine.

Loose Ends

We have not examined how we can pass parameters to an XPath function (using the “params” element in the descriptor), nor have we looked at how to handle XML results.  Those are subjects for another blog entry.  It is also possible to register the XPath function with just a subset of components by using different descriptor files.  However, in most cases I suspect that making the function available in all components and XSLT is what is wanted.

So go and enjoy creating some custom XPath functions; your only problem is finding functionality that is not already available in XPath!

References

Sample code used above is available here as 2 JDeveloper projects.

The Oracle documentation covers this material:

- Creating User-Defined XPath Extension Functions
- Importing User-Defined Functions
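One extra tip: a quick grep of the descriptor before packaging the jar can catch the two mistakes that are hardest to diagnose at runtime – a namespace that doesn’t end in the implementing class name, and a className that doesn’t match.  This sketch writes a cut-down copy of the descriptor above to a temporary path (the path is illustrative) and checks both:

```shell
# Write a minimal copy of the descriptor and check the two load-bearing
# values: the XSLT java namespace and the implementing class name.
DESC=/tmp/ext-soa-xpath-functions-config.xml   # illustrative path
cat > "$DESC" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<soa-xpath-functions xmlns="http://xmlns.oracle.com/soa/config/xpath"
    xmlns:blog="http://www.oracle.com/XSL/Transform/java/antony.blog.XPathGetCurrentTimeMillis">
  <function name="blog:getCurrentTimeMillis">
    <className>antony.blog.XPathGetCurrentTimeMillis</className>
    <return type="string"/>
  </function>
</soa-xpath-functions>
EOF
grep -q 'XSL/Transform/java/antony.blog.XPathGetCurrentTimeMillis' "$DESC" && echo "namespace ok"
grep -q '<className>antony.blog.XPathGetCurrentTimeMillis</className>' "$DESC" && echo "className ok"
```

If the namespace check fails, the XSLT processor will not be able to resolve the function even though the SCA components can.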


SOA Suite

Installing the Latest & Greatest

Running SOA Suite 11.1.1.5 on OEL 6

In my last post I set up Oracle 11gR2 on an Oracle Enterprise Linux 6 system hosted in VirtualBox.  In this post I will install SOA Suite onto the same box.  This will give me a complete SOA environment in a single virtual image that has all the latest software: OS, database, Java, App Server and SOA.  Because I am running a 64-bit Linux I will install 64-bit JVMs (multiple, for choice and testing) and a generic WebLogic install.  Specifically I will install:

- JDK 1.7
- JDK 1.6
- JRockit 1.6
- WebLogic 10.3.5
- OSB 11.1.1.5
- SOA Suite 11.1.1.5

I will also run the Repository Creation Utility to create the required schemas in the database.

Additional OS Preparation

Most of the OS configuration was done in preparing it for the database install, but I needed to add two more packages in order to run the RCU:

- libXext.i686
- libXtst.i686

I added them by using yum:

yum install libXext.i686 libXtst.i686

Creating SOA Repository

Having added the required packages I ran the RCU and installed into my 11gR2 database.  The RCU doesn’t check the client it is executing on, only that the target database server is certified, so there were no problems with running the RCU.

Installing Java

Because I am on a 64-bit system I don’t want to use the 32-bit JVMs bundled in the WebLogic install, so I started off by installing the JVMs I wanted to use.  Before installing the JVMs I created a Middleware Home directory because I wanted my JVMs to be in the same Middleware Home as the rest of my installation.  My database installation had created an ORACLE_BASE at “/home/oracle/app/oracle” and an ORACLE_HOME at “$ORACLE_BASE/product/11.2.0/db”.  I decided to use the same ORACLE_BASE and put my MW_HOME at “$ORACLE_BASE/product/FMW” so that my middleware installs would mirror the structure of my database install.  I then changed to that directory and installed the 1.6 HotSpot JVM, the JRockit JVM for Java 6 and the JDK 1.7 JVM.
To make it easier to upgrade in the future I then created symbolic links for the JVMs:

ln -s jrockit-jdk1.6.0_24-R28.1.3-4.0.1 jrockit-jdk1.6
ln -s jdk1.7.0 jdk1.7
ln -s jdk1.6.0_25 jdk1.6

Using symbolic links allows me to install a later version of a JDK and start using it just by updating the link.

Installing WebLogic

Having installed the JVMs I was then ready to run the WebLogic installer.  Because I was using a 64-bit JVM I had to run the generic installer, which is delivered as an executable jar file.  I added all three of my JVMs to the list of JVMs in WebLogic and made the 1.6 HotSpot JVM the default.  Note that WebLogic identifies the JVMs and follows symbolic links to get to the actual directory.  The 1.7 JDK is identified as an Oracle rather than a Sun JDK because Oracle now owns Sun.  Normally if the vendor is Oracle then WebLogic assumes that this is a JRockit JVM.  This will cause a problem later because HotSpot has different parameters to JRockit.

After installing WebLogic I edited $MW_HOME/wlserver_10.3/common/bin/commEnv.sh to refer to the symbolic link for the JVM rather than the actual directory.  This will make it easier to upgrade the JVM in the future.

JAVA_HOME="/home/oracle/app/oracle/product/FMW/jdk1.6"

Installing OSB

Both OSB and SOA Suite use the Oracle Universal Installer to check the platform, required OS parameters and packages of the target system.  Unfortunately OEL 6 is not on the list of expected OS platforms, so the installer fails its pre-requisite checks.  In the DB installer we could choose to ignore failed checks from the GUI; with the OSB and SOA Suite installers we can’t.  So we have two options: ignore the pre-requisite checks, or add a new check for OEL 6.  Whichever choice you go with, the install of OSB is now straightforward.  I didn’t install the designer because it is currently only built for a 32-bit system.
I will install the designer on a client machine rather than on my server VM.

Option 1 – Ignore Pre-Reqs

You can skip the installer pre-req check by passing the -ignoreSysPrereqs flag to the installer:

./runInstaller -ignoreSysPrereqs

This has the advantage that it is quick and easy, but of course you may actually fail a check because something really is needed, and you would never know it.  That leads us to option 2.

Option 2 – Create New Check for OEL 6

The pre-req checks are held in a file “Disk1/stage/prereq/linux64/refhost.xml” on the installation image.  I added a new <OPERATING_SYSTEM> tag to the <CERTIFIED_SYSTEMS>.  I copied the redhat 5.4 tag, changed the “VALUE” attribute of “<VERSION>” to “6.0” and changed the “i386” “ARCHITECTURE” attributes in the “<PACKAGE>” elements to “i686”:

    <OPERATING_SYSTEM>
      <VERSION VALUE="6.0"/>
      <ARCHITECTURE VALUE="x86"/>
      <NAME VALUE="Linux"/>
      <VENDOR VALUE="redhat"/>
      <GLIBC ATLEAST="2.5-12">
      </GLIBC>
      <PACKAGES>
          <PACKAGE NAME="binutils" VERSION="2.17.50.0.6" />
          <PACKAGE NAME="compat-libstdc++-33" VERSION="3.2.3" ARCHITECTURE="x86_64" />
          <PACKAGE NAME="compat-libstdc++-33" VERSION="3.2.3" ARCHITECTURE="i686" />
          <PACKAGE NAME="elfutils-libelf" VERSION="0.125" />
          <PACKAGE NAME="elfutils-libelf-devel" VERSION="0.125" />
          <PACKAGE NAME="gcc" VERSION="4.1.1" />
          <PACKAGE NAME="gcc-c++" VERSION="4.1.1" />
          <PACKAGE NAME="glibc" VERSION="2.5-12" ARCHITECTURE="x86_64" />
          <PACKAGE NAME="glibc" VERSION="2.5-12" ARCHITECTURE="i686" />
          <PACKAGE NAME="glibc-common" VERSION="2.5" />
          <PACKAGE NAME="glibc-devel" VERSION="2.5" ARCHITECTURE="x86_64" />
          <PACKAGE NAME="glibc-devel" VERSION="2.5-12" ARCHITECTURE="i686" />
          <PACKAGE NAME="libaio" VERSION="0.3.106" ARCHITECTURE="x86_64" />
          <PACKAGE NAME="libaio" VERSION="0.3.106" ARCHITECTURE="i686" />
          <PACKAGE NAME="libaio-devel" VERSION="0.3.106" />
          <PACKAGE NAME="libgcc" VERSION="4.1.1" ARCHITECTURE="x86_64" />
          <PACKAGE NAME="libgcc" VERSION="4.1.1" ARCHITECTURE="i686" />
          <PACKAGE NAME="libstdc++" VERSION="4.1.1" ARCHITECTURE="x86_64" />
          <PACKAGE NAME="libstdc++" VERSION="4.1.1" ARCHITECTURE="i686" />
          <PACKAGE NAME="libstdc++-devel" VERSION="4.1.1" />
          <PACKAGE NAME="make" VERSION="3.81" />
          <PACKAGE NAME="sysstat" VERSION="7.0.0" />
      </PACKAGES>
      <KERNEL>
<!--
          <PROPERTY NAME="semmsl" NAME2="semmsl2" VALUE="250" />
          <PROPERTY NAME="semmns" VALUE="32000" />
          <PROPERTY NAME="semopm" VALUE="100" />
          <PROPERTY NAME="semmni" VALUE="128" />
          <PROPERTY NAME="shmmax" VALUE="536870912" />
          <PROPERTY NAME="shmmni" VALUE="4096" />
          <PROPERTY NAME="shmall" VALUE="2097152" />
          <PROPERTY NAME="file-max" VALUE="65536" />
          <PROPERTY NAME="ip_local_port_range" ATLEAST="1024" ATMOST="65000" />
          <PROPERTY NAME="rmem_default" VALUE="4194304" />
          <PROPERTY NAME="rmem_max" VALUE="4194304" />
          <PROPERTY NAME="wmem_default" VALUE="262144" />
          <PROPERTY NAME="wmem_max" VALUE="262144" />
-->
          <PROPERTY NAME="VERSION" VALUE="2.6.18"/>
          <PROPERTY NAME="hardnofiles" VALUE="4096"/>
          <PROPERTY NAME="softnofiles" VALUE="4096"/>
      </KERNEL>
    </OPERATING_SYSTEM>

This allows you to run the installer as normal, and it will validate the pre-reqs correctly, giving you confidence that things are set up correctly in the OS.

Installing SOA Suite

After installing OSB I installed SOA Suite using the same workaround for the OS pre-req mismatch as we used for OSB.
At this point I took another snapshot so that I could build different types of domain from the same repository.

Creating a Development Domain

My next step was to create a domain suitable for development with SOA Suite and Service Bus.  I chose the Developer OSB and Developer SOA Suite templates from the domain configuration wizard, along with the Enterprise Manager template.  This creates a single server, the AdminServer, that has all the SOA and Service Bus components.  This works well for development environments because it minimizes the required memory footprint by putting everything into a single server.

After creating the domain and starting the AdminServer I fired up jconsole to check the memory usage of the JVM.  I saw that it had committed 1.3GB out of a maximum 1.5GB, and I was surprised to notice that the PermGen was using almost 500MB.  The initial default memory size is 750MB and the maximum is 1.5GB, so I changed the memory settings by editing setSOADomainEnv.sh.  I increased the initial memory size to 1.5GB and set the maximum memory size to 2.5GB by setting PORT_MEM_ARGS.  I also modified the initial PermGen size to be 512MB and the maximum PermGen size to be 768MB to ensure I didn’t run out of space for code.

PORT_MEM_ARGS="-Xms1536m -Xmx2560m"
PORT_MEM_ARGS="${PORT_MEM_ARGS} -XX:PermSize=512m -XX:MaxPermSize=768m"

Note that I used PORT_MEM_ARGS because I was using a 64-bit Linux system.  If it was a 32-bit system then I would have set DEFAULT_MEM_ARGS.  I then took a snapshot of my virtual image so I can jump straight to a ready-to-use domain when I need it.

Creating a Production Domain

Having created a development domain that ran in a single server, I also wanted a production-style domain that had everything I would need for SOA.  I chose the BPM Suite, SOA Suite, Enterprise Manager, Service Bus and Business Activity Monitoring templates for this domain.
I chose production mode for my domain to better reflect what customers would have in their production environments.  I accepted all the other defaults and created the domain.  This creates separate managed servers for SOA/BPM, BAM and OSB; the consoles still run in the AdminServer.

Before starting, I created a boot.properties file with the following properties:

username=weblogic
password=welcome1

I created the following directories under my domain directory (using “mkdir -p <dirname>”) and put a boot.properties file in each of them:

- servers/AdminServer/security
- servers/soa_server1/security
- servers/bam_server1/security
- servers/osb_server1/security

I then started the admin server.  While it was starting I ran the “$MW_HOME/oracle_common/common/bin/setNMProps.sh” script to enable Node Manager to start the managed servers, and then started the Node Manager.

With the Node Manager running I was able to check that the consoles were working fine.  I then noticed that the OSB server was not targeted at a machine and so could not be started by Node Manager.  I targeted “osb_server1” at the default “LocalMachine” and then started the managed servers.  After verifying that all was well with the managed servers I shut the machine down and took a snapshot.

A Solid Base

I use this kind of VM configuration all the time because I can always create a new domain from the base software to match my exact requirements, but most of the time I can use one of the existing domains.  Although it is quite heavy on disk, it is more efficient than having multiple images.  Hope you find this useful.
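As an aside, the boot.properties seeding described above lends itself to a small script.  A sketch, using a temporary directory as a stand-in for the real domain home (the server names and credentials are the ones from this post):

```shell
# Seed boot.properties into each server's security directory so the
# servers can start without prompting for credentials.
DOMAIN=/tmp/soa_domain   # stand-in; use the real domain home on the VM
for s in AdminServer soa_server1 bam_server1 osb_server1; do
  mkdir -p "$DOMAIN/servers/$s/security"
  printf 'username=weblogic\npassword=welcome1\n' \
    > "$DOMAIN/servers/$s/security/boot.properties"
done
ls "$DOMAIN/servers"
```

On first start WebLogic encrypts each boot.properties in place, so the clear-text credentials do not linger on disk.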


Development

A Database Detour

Running Oracle Database 11gR2 on OEL 6

Recently I decided to build a new VirtualBox environment and base it on the latest Linux release (Oracle Enterprise Linux 6).  After installing 64-bit OEL my next task was to install a database.  I decided to go with the latest 11g database even though it is not yet certified on OEL 6.

OS Preparation

Before starting the install I needed to make sure I had the correct OS configuration.  The Oracle Database Quick Installation Guide 11g Release 2 (11.2) for Linux x86-64 provides the requirements for the OS.

Packages

Section 4.3 Package Requirements of the install guide details the package requirements.  Note that the installer will incorrectly identify the required packages because in OEL 6 the package suffix for 32-bit is i686 rather than i386.  Also note that when I installed there was no oracle-validated package for OEL 6.

I had set up yum to point to the OEL 6 repository, so I was able to use yum to install the required packages.  I ran the following command:

yum install binutils compat-libstdc++-33 compat-libstdc++-33.i686 \
        elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc.i686 \
        glibc-common glibc-devel glibc-devel.i686 glibc-headers ksh libaio \
        libaio.i686 libaio-devel libaio-devel.i686 libgcc libgcc.i686 libstdc++ \
        libstdc++.i686 libstdc++-devel make numactl-devel sysstat

I also added the UnixODBC packages as detailed in section 4.5.1 Oracle ODBC Drivers by using the following command:

yum install unixODBC unixODBC-devel unixODBC.i686 unixODBC-devel.i686

Users & Groups

In section 5 Creating Required Operating System Groups and Users Oracle recommends creating dba and oinstall groups to correspond to different DBA roles, so I ran the following commands to create the groups and assign the already created oracle user to them:

groupadd oinstall
groupadd dba
usermod -g oinstall -G dba,oracle,vboxsf oracle

I verified this was correct by running the “id oracle”
command, which gives:

uid=500(oracle) gid=502(oinstall) groups=502(oinstall),500(oracle),503(dba),501(vboxsf)

The vboxsf group is added to allow the oracle user to mount shared folders under VirtualBox.

Kernel Parameters

I edited /etc/sysctl.conf in accordance with section 6 Configuring Kernel Parameters to set the following kernel parameters:

# Oracle Settings
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 10000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586

I decreased the local port range to leave more ports free for SOA Suite.  I made these effective immediately by running “sysctl -p”.

Resource Limits

I altered the oracle user resource limits according to Check Resource Limits for the Oracle Software Installation Users by editing /etc/security/limits.conf as follows:

# Oracle Settings
oracle              soft    nproc   2047
oracle              hard    nproc   16384
oracle              soft    nofile  1024
oracle              hard    nofile  65536
oracle              soft    stack   10240
oracle              hard    stack   10240

Software Installation

Having prepared the OS I could then install the database.  Before I launched the installer I set the ORACLE_HOSTNAME using the command:

export ORACLE_HOSTNAME=`hostname`

This helps the installer know what hostname it should use for configuring the listener and EM.  I then launched the installer and created and configured a “Desktop Class” database using the Unicode (AL32UTF8) character set, which will be required by the SOA Suite Repository Creation Utility.

When checking for system pre-requisites the installer will complain about the following:

- the kernel parameter ip_local_port_range, because I reduced it for SOA Suite
- the packages libaio, compat-libstdc++, libaio-devel, libgcc, libstdc++, unixODBC and unixODBC-devel, because the suffix is now i686 rather than i386
- pdksh, because in OEL 6 the package was renamed to ksh

I chose to “Ignore All” errors and continued; this gave me a working database installation.

Auto-Startup

I am a Middleware guy and this image will probably be used by other Middleware folks.  We don’t know about databases, so I set up the database to start automatically when the Linux OS starts.  I did this by setting the start parameter for the database to ‘Y’ in the /etc/oratab file so that the dbstart command would start my instance:

orcl:/home/oracle/app/oracle/product/11.2.0/db:Y

I then created an init script dbora in the /etc/init.d directory to start and stop the database at OS startup and shutdown.  This script is based on the one in the 11gR1 documentation.

#!/bin/sh -x
# chkconfig: 2345 99 10
# description: Oracle 11g Database
#
# Oracle Home Location
#
ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/db
#
# Oracle User
#
ORACLE=oracle
#
# Oracle SID
#
ORACLE_UNQNAME=orcl
export ORACLE_UNQNAME
PATH=${PATH}:${ORACLE_HOME}/bin

# Execute command as Oracle User
if [ ! "$2" = "ORA_DB" ] ; then
   su - ${ORACLE} -c "$0 $1 ORA_DB"
   touch /var/lock/subsys/dbora
   exit
fi

case $1 in
'start')
        ${ORACLE_HOME}/bin/dbstart ${ORACLE_HOME} &
        ${ORACLE_HOME}/bin/emctl start dbconsole &
        ;;
'stop')
        ${ORACLE_HOME}/bin/emctl stop dbconsole &
        ${ORACLE_HOME}/bin/dbshut ${ORACLE_HOME} &
        ;;
*)
        echo "usage: $0 {start|stop}"
        ;;
esac
exit

The script is invoked by passing in start or stop as a parameter.  To have it automatically invoked at startup and shutdown I used chkconfig as shown:

chkconfig --add dbora

This causes the database, listener and database control to be started when the system enters runlevel 2, 3, 4 or 5, based on the chkconfig property in the script.  Note that I start the database and EM as background tasks to improve startup performance, but this does mean that the database might not have finished initialising when a user logs in.

Mission Complete

With the database installation complete I was able to take a snapshot of the VM, ready to start building an 11.1.1.5 SOA installation.

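One small check worth adding to the database post above: dbstart only starts instances whose /etc/oratab entry ends in “:Y”, so it is worth verifying the flag after editing the file.  A sketch run against a temporary copy (on the real machine you would point it at /etc/oratab):

```shell
# dbstart skips any oratab entry that does not end in ':Y'.
ORATAB=/tmp/oratab    # stand-in for /etc/oratab
echo 'orcl:/home/oracle/app/oracle/product/11.2.0/db:Y' > "$ORATAB"
awk -F: '!/^#/ && $3 == "Y" { print $1 " will auto-start" }' "$ORATAB"
# prints: orcl will auto-start
```

If the instance name does not appear in the output, the dbora script will come up but the database itself will stay down.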

SOA Suite

Gone With the Wind?

Where Have All the Composites Gone?

I was just asked to help out with an interesting problem at a customer.  All their composites had disappeared from the EM console, none of them showed as loading in the log files, and there was an ominous error message in the logs.

Symptoms

After a server restart the customer noticed that none of their composites were available; they didn’t show in the EM console, and in the log files they saw this error message:

SEVERE: WLSFabricKernelInitializer.getCompositeList Error during parsing and processing of deployed-composites.xml file

This indicates some sort of problem when parsing the deployed-composites.xml file.  This is very bad, because deployed-composites.xml is basically the table of contents that tells the SOA Infrastructure what composites to load and where to find them in MDS.  If you can’t read this file you can’t load any composites, and your SOA Server now has all the utility of a chocolate teapot.

Verification

We can look at the deployed-composites.xml file from MDS either by connecting JDeveloper to MDS, exporting the file using WLST, or exporting the whole soa-infra MDS partition by using EM->SOA->soa-infra->Administration->MDS Configuration.  Exporting via EM is probably the easiest because it then prepares you to fix the problem later.  After exporting the partition to local storage on the SOA Server I ran an XSLT transform across the file deployed-composites/deployed-composites.xml.
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xs="http://www.w3.org/2001/XMLSchema"
                xmlns="http://www.w3.org/1999/xhtml">
    <xsl:output indent="yes"/>
    <xsl:template match="/">
        <testResult>
            <composite-series>
                <xsl:attribute name="elementCount"><xsl:value-of select="count(deployed-composites/composite-series)"/></xsl:attribute>
                <xsl:attribute name="nameAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series[@name])"/></xsl:attribute>
                <xsl:attribute name="defaultAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series[@default])"/></xsl:attribute>
                <composite-revision>
                    <xsl:attribute name="elementCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision)"/></xsl:attribute>
                    <xsl:attribute name="dnAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@dn])"/></xsl:attribute>
                    <xsl:attribute name="stateAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@state])"/></xsl:attribute>
                    <xsl:attribute name="modeAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@mode])"/></xsl:attribute>
                    <xsl:attribute name="locationAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@location])"/></xsl:attribute>
                    <composite>
                        <xsl:attribute name="elementCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision/composite)"/></xsl:attribute>
                        <xsl:attribute name="dnAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision/composite[@dn])"/></xsl:attribute>
                        <xsl:attribute name="deployedTimeAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision/composite[@deployedTime])"/></xsl:attribute>
                    </composite>
                </composite-revision>
                <xsl:apply-templates select="deployed-composites/composite-series"/>
            </composite-series>
        </testResult>
    </xsl:template>
    <xsl:template match="composite-series">
        <xsl:if test="not(@name) or not(@default) or composite-revision[not(@dn) or not(@state) or not(@mode) or not(@location)]">
            <ErrorNode>
                <xsl:attribute name="elementPos"><xsl:value-of select="position()"/></xsl:attribute>
                <xsl:copy-of select="."/>
            </ErrorNode>
        </xsl:if>
    </xsl:template>
</xsl:stylesheet>

The output from this is not pretty, but it shows any <composite-series> tags that are missing expected attributes (name and default).  It also shows how many composites are in the file (111) and how many revisions of those composites (115).
<?xml version="1.0" encoding="UTF-8"?>
<testResult xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://www.w3.org/1999/xhtml">
   <composite-series elementCount="111" nameAttributeCount="110" defaultAttributeCount="110">
      <composite-revision elementCount="115" dnAttributeCount="114" stateAttributeCount="115"
                          modeAttributeCount="115"
                          locationAttributeCount="114">
         <composite elementCount="115" dnAttributeCount="114" deployedTimeAttributeCount="115"/>
      </composite-revision>
      <ErrorNode elementPos="82">
         <composite-series xmlns="">
            <composite-revision state="on" mode="active">
               <composite deployedTime="2010-12-15T11:50:16.067+01:00"/>
            </composite-revision>
         </composite-series>
      </ErrorNode>
   </composite-series>
</testResult>

From this I could see that one of the <composite-series> elements (number 82 of 111) was corrupt.  Having found the problem I now needed to fix it.

Fixing the Problem

The solution was really quite easy.  First, for safety's sake, I took a backup of the exported MDS partition.  I then edited the deployed-composites/deployed-composites.xml file to remove the offending <composite-series> tag.  Finally I imported the repaired file back into MDS, restarted the SOA domain, and was rewarded by seeing that the deployed composites were once again visible.

Summary

One possible cause of not being able to see deployed composites after a SOA 11g system restart is a corrupt deployed-composites.xml file.  Retrieving this file from MDS, repairing it, and replacing it in MDS can solve the problem.  This still leaves the question of how the file became corrupt in the first place!
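As an aside, the same structural check the XSLT performs can be sketched in a few lines of Python for anyone who prefers scripting it.  This is only an illustration: the sample document below is made up (the composite names and dn values are hypothetical), but the attribute rules mirror what the stylesheet tests for.

```python
# Standalone sanity check for an exported deployed-composites.xml.
# Mirrors the XSLT above: flags <composite-series> elements missing the
# name/default attributes, and <composite-revision> children missing
# dn/state/mode/location.
import xml.etree.ElementTree as ET

def find_corrupt_series(xml_text):
    """Return (position, missing-attributes) pairs for damaged series."""
    root = ET.fromstring(xml_text)
    problems = []
    for pos, series in enumerate(root.findall("composite-series"), start=1):
        missing = [a for a in ("name", "default") if a not in series.attrib]
        for rev in series.findall("composite-revision"):
            missing += [a for a in ("dn", "state", "mode", "location")
                        if a not in rev.attrib]
        if missing:
            problems.append((pos, sorted(set(missing))))
    return problems

# Hypothetical sample: the second series has lost its attributes.
sample = """<deployed-composites>
  <composite-series name="default/OrderProcess" default="default/OrderProcess!1.0">
    <composite-revision dn="default/OrderProcess!1.0" state="on" mode="active" location="dc1"/>
  </composite-series>
  <composite-series>
    <composite-revision state="on" mode="active"/>
  </composite-series>
</deployed-composites>"""

print(find_corrupt_series(sample))  # -> [(2, ['default', 'dn', 'location', 'name'])]
```

Running this over a real export would point you at the same element position (82 in the case above) that the XSLT reported.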


Using the SOA-BPM VirtualBox Appliance

Quickstart Guide to Using Oracle Appliance for SOA/BPM

Recently I have been setting up some machines for fellow engineers.  My base setup consists of Oracle Enterprise Linux with Oracle VirtualBox.  Note that after installing VirtualBox I needed to add the VirtualBox Extension Pack to enable RDP access amongst other features.  In order to get them started quickly with some images I downloaded the pre-built appliance for SOA/BPM from OTN.  Out of the box this provides a VirtualBox image that is pre-installed with everything you will need to develop SOA/BPM applications.  Specifically, by using the virtual appliance I got the following pre-installed and configured:

Oracle Enterprise Linux 5
 - User oracle, password oracle
 - User root, password oracle
Oracle Database XE
 - Pre-configured with SOA/BPM repository
 - Set to auto-start on OS startup
Oracle SOA Suite 11g PS2
 - Configured with a "collapsed domain": all services (SOA/BAM/EM) running in AdminServer
 - Listening on port 7001
Oracle BPM Suite 11g
 - Configured in the same domain as SOA Suite
Oracle JDeveloper 11g
 - With SOA/BPM extensions

Networking

The VM by default uses NAT (Network Address Translation) for network access.  Make sure that the advanced settings for port forwarding allow access through the host to guest ports.  It should be pre-configured to forward requests on the following ports:

Purpose    Host Port    Guest Port (VBox Image)
SSH        2222         22
HTTP       7001         7001
Database   1521         1521

Note that only one VirtualBox image can use a given host port, so make sure you are not clashing if it seems not to work.

What's Left to Do?

There is still some customization of the environment that may be required.
If you need to configure a proxy server, as I did, then for the oracle and root users set up an HTTP proxy:
 - Add "export http_proxy=http://proxy-host:proxy-port" to ~oracle/.bash_profile and ~root/.bash_profile
 - Add "export http_proxy=http://proxy-host:proxy-port" to /etc/.bashrc
 - Edit System->Preferences to set the Network Proxy
 - In Firefox set Preferences->Network->Connection Settings to "Use system proxy settings"
 - In JDeveloper set Edit->Preferences->Web Browser and Proxy to the required proxy settings

You may need to configure yum to point to a public OEL yum repository – such as http://public-yum.oracle.com.

If you are going to be accessing the SOA server from outside the VirtualBox image then you may want to set the soa-infra Server URLs to be the hostname of the host OS.

Snap!

Once I had the machine configured how I wanted to use it I took a snapshot so that I can always get back to the pristine install I have now.  Snapshots are one of the big benefits of putting a development environment into a virtualized environment.  I can make changes to my installation and if I mess it up I can restore the image to a last known good snapshot.

Hey Presto! Ready to Go

This is the quickest way to get up and running with SOA/BPM Suite.  Out of the box the download will work; I only did extra customization so I could use services outside the firewall and browse outside the firewall from within my SOA VirtualBox image.  I also used yum to update the OS to the latest binaries.

So have fun.


Monitoring Undelivered Messages in BPEL in SOA 10g

In previous blogs I have discussed the use of auto-recovery to re-submit asynchronous messages for delivery.  I am currently working with a client that wants to know how many undelivered messages they have, and if the count reaches a certain threshold then they want to alert the operator.  To do this they plan on using the Enterprise Manager alert functions, but first they need to know how many undelivered instances are out there.

Undelivered asynchronous messages are stored in the INVOKE_MESSAGE table with a RECEIVE_DATE timestamp marking when they were placed in the table and a STATE to indicate if they have not yet been processed (DELIVERY_PENDING 0).  So to query how many messages have not been processed after 10 minutes we can run the query:

select count(state)
from invoke_message
where state = '0'
and receive_time < (current_timestamp - interval '10' minute)

This counts ("count(state)") the number of unprocessed messages ("state = '0'") that have been waiting for more than 10 minutes ("receive_time < (current_timestamp - interval '10' minute)").  We want to allow a delay, in the example 10 minutes, to give the BPEL engine time to start processing the messages.  We can use this query to check that our BPEL instance is processing messages.

References
 - Oracle SOA Suite Best Practices Guide 10.1.3.3 – excellent explanation of async message delivery and discussion of the invoke_message table
 - Threading in 10.1.3.4 – blog entry I wrote explaining 10.1.3 threading
 - Oracle BPEL PM – Dehydration Store? – blog entry that describes the database structures used by BPEL
 - Invoke Message State – post on the OTN Forum
 - Description of the STATE Values for Tables in the BPEL Dehydration Store – Oracle Support note
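For a scripted monitoring check, the same threshold logic can be sketched in Python.  This is only a sketch with made-up rows: in practice the count would come from running the SQL above against the dehydration store, and the alerting action would be whatever the operations team uses.

```python
# Count asynchronous messages still undelivered after a grace period,
# mirroring the SQL above.  The rows here are fabricated examples of
# (STATE, receive timestamp) pairs as read from INVOKE_MESSAGE.
from datetime import datetime, timedelta

DELIVERY_PENDING = '0'  # STATE value for a message not yet processed

def undelivered_count(rows, now, grace_minutes=10):
    """Count messages still pending after the grace period."""
    cutoff = now - timedelta(minutes=grace_minutes)
    return sum(1 for state, received in rows
               if state == DELIVERY_PENDING and received < cutoff)

now = datetime(2011, 1, 1, 12, 0)
rows = [
    ('0', datetime(2011, 1, 1, 11, 30)),  # pending, 30 min old -> counted
    ('0', datetime(2011, 1, 1, 11, 55)),  # pending, still within grace period
    ('2', datetime(2011, 1, 1, 11, 0)),   # already handled -> ignored
]
if undelivered_count(rows, now) > 0:
    print("alert the operator")
```

The grace period plays the same role as the "interval '10' minute" clause: it gives the BPEL engine time to pick messages up before we start worrying.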


Installing an 11g SOA Cluster – Part VI Server Failover

Configuring a SOA 11g PS2 SOA Cluster – Part VI Automatic Server Migration

In previous blog entries we built a SOA/BPM cluster with associated Web Services Manager and BAM clusters and got those working.  We now need to enable automatic migration of servers to ensure that no messages are lost or unnecessarily delayed.

Creating the Server Migration Leasing Table

First we need to create a migration leasing table.  This is used to keep track of which machines are hosting which managed servers.

Creating the Leasing Tablespace

We can put this into its own tablespace to keep it separate from the SOA schemas.  Using SQL*Plus we create the new tablespace as shown below:

create tablespace leasing
       logging datafile '/u01/oradata/rac/leasing.dbf'
       size 32m autoextend on next 32m maxsize 2048m
       extent management local;

This creates a tablespace leasing using the new file "leasing.dbf" in the same location as the other database files in our RAC cluster.  The file is set to an initial size of 32MB and will grow in 32MB increments until it reaches a limit of 2GB.

Creating a Leasing Schema

We can now create a user and schema in this tablespace using SQL*Plus as follows:

grant create table, create session to leasing identified by welcome1;
alter user leasing default tablespace leasing;
alter user leasing quota unlimited on leasing;

This creates the user leasing and sets its default tablespace to be leasing.

Creating the Leasing Table

We are now able to create our leasing table using the leasing.ddl script found in our WebLogic install at $WLS_HOME/server/db/oracle/920.  We can copy this to our database server using scp and then execute it using SQL*Plus as the leasing user as shown:

connect leasing/welcome1
@leasing.ddl

This will create a single table in the leasing schema to hold managed server lease details.
(EDG) Configuring WebLogic with a Leasing Data Source

Now that we have a leasing table we need to configure WebLogic with a data source that can access it.  This data source only needs to support single-phase commit because it will not be interacting with other transactional resources.  We will create a multi data source to take advantage of our RAC cluster.

 - In the WebLogic console navigate to Services->JDBC->Multi Data Sources.
 - Acquire an update lock if necessary and then select New to start creating a new multi data source.
 - Set the name to be "leasing" and the JNDI name to be "jdbc/leasing", then click Next.
 - Target this data source to all servers in the SOA_Cluster and BAM_Cluster, then click Next.
 - Choose the non-XA driver and click Next.
 - Create new data sources for each RAC database instance:
   - The name should be "leasing-rac-N" and the JNDI name "jdbc/leasing-rac-N".
   - Database type is "Oracle" and the driver should be "Oracle Driver (Thin) for RAC server-Instance connection Version 10,11".
   - Deselect "Supports Global Transactions" as we don't need them for this data source.
   - Service name can be "soaedg.soa.oracle.com", database is "racN", hostname is "racN" and username/password is "leasing/welcome1".
   - Target the data source at all servers in the SOA_Cluster and BAM_Cluster.
 - Add the newly created data sources to the Chosen column and finish.
 - Don't forget to commit your changes!

We now have our leasing data source available for use.

(EDG) Configuring Node Manager for Address Management

We now need to configure the node manager to manage IP addresses.  On each machine we edit the nodemanager.properties file and add the following entries:

Interface=eth1
NetMask=255.255.255.0
UseMACBroadcast=true

This tells node manager to use eth1 to allocate IP addresses and to use a 24-bit network mask.  UseMACBroadcast causes the node manager to use arping to announce a new IP address binding.
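As a mental model of what the multi data source gives us, the sketch below shows the failover behaviour: connections are handed out from the first member data source whose RAC instance is reachable.  This is plain Python, not WebLogic code; the member names follow the data sources created above, and the health flags are simulated.

```python
# Toy model of a failover-style multi data source: try the member data
# sources in order and use the first healthy one.
def pick_member(members, healthy):
    """Return the first healthy member data source name.

    members: ordered list of member JNDI names.
    healthy: dict mapping member name -> bool (simulated reachability).
    """
    for name in members:
        if healthy.get(name):
            return name
    raise RuntimeError("no leasing data source available")

members = ["jdbc/leasing-rac-1", "jdbc/leasing-rac-2"]
# Both instances up: the first member is used.
print(pick_member(members, {"jdbc/leasing-rac-1": True, "jdbc/leasing-rac-2": True}))
# rac1 down: traffic fails over to the second member.
print(pick_member(members, {"jdbc/leasing-rac-1": False, "jdbc/leasing-rac-2": True}))
```

This ordering-based failover is why losing one RAC node does not break server migration leasing: the leasing queries simply flow to the surviving instance.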
(EDG) Note that we moved the node manager configuration to a non-shared location when we first set it up, so the nodemanager.properties file should be found in /u01/oracle/admin/NodeManager.  We previously configured the machines to grant appropriate privileges to the oracle user to run the ifconfig and arping commands.  These privileges are required by the wlsifconfig.sh script used by the node manager.

(EDG) Configure Server Migration Targets

Use the WebLogic console to set up server migration.  First select the machines to be used for each cluster (BAM_Cluster and SOA_Cluster).  Navigate to Environment->Clusters and for the BAM and SOA clusters:

 - Select the cluster (BAM_Cluster or SOA_Cluster) and choose the Migration tab.
 - Move SOAHost1 and SOAHost2 from the Available field to the target field.
 - Choose the leasing data source and save your changes.

For the BAM server (BAM1) and all SOA servers (SOA1, SOA2) we then need to configure their migration targets by navigating to Environment->Servers in the WebLogic console.  For each server do the following:

 - Select the Migration tab for the server and select the correct host to allow migration to: for SOA1 this is SOAHost2, for SOA2 it is SOAHost1 and for BAM1 it is SOAHost1.  Note that we allow the server to migrate to the other machine in the cluster.
 - Select Automatic Server Migration Enabled, then save and activate changes.
 - Restart the Admin Server.

This should enable your managed SOA servers and BAM server to fail over between machines.  We don't need to worry about failover of the WSM servers or the other BAM server.

(EDG) Job Mostly Done

We have now pretty much finished setting up our SOA and BAM cluster.  We still need to test our environment but otherwise we are good to go.  In a future blog entry I will extend the domain to support OSB.
References
 - Oracle® Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite 11g Release 1 (11.1.1)
 - Oracle® Fusion Middleware High Availability Guide 11g Release 1 (11.1.1)


Installing an 11g SOA Cluster – Part V BAM Configuration

Configuring a SOA 11g PS2 SOA Cluster – Part V BAM Configuration

In the previous blog entry we set up the SOA and BPM components of our cluster.  In this blog entry we will add BAM to the mix as shown in the diagram above.  Note that we will configure BAM Server 1 to run on machine 2 because this BAM managed server uses more system resources.

BAM is not a symmetrically scalable system.  It consists of a central BAM server (BAM1) that contains the cache of all events that have occurred.  This cache is not currently replicated.  Hopefully in the future it will use a Coherence replicated cache to make it available to all nodes.  Because the cache is not replicated the BAM server is special.  Other BAM servers can help to take web load away from the main server, but all event updates must go to the central BAM server.  Because the central BAM server is a singleton we need to configure it to be able to run on any machine so that we do not lose BAM capability in the event of a machine failure.  The other BAM servers have no such requirement and do not require floating IP addresses.

Extend the WebLogic Domain to Support BAM

We will extend the domain to support BAM managed servers in a cluster.

Enable BAM Managed Server 1 IP Address

Because the BAM singleton server will later be configured to run on any machine through whole server migration, we need to bind its IP address to some physical machine because we have not set up the automatic migration of IP addresses.  We do this by running the ifconfig and arping commands as shown below on the physical machine we first want to run the central BAM server on.  In the example below I use sub-adapter 7 to avoid sub-adapters 8 and 9, which we used for the admin and SOA servers.  10.0.3.101 is the IP address for bam1.

sudo /sbin/ifconfig eth1:7 10.0.3.101 netmask 255.255.255.0
sudo /sbin/arping -q -U -c 3 -I eth1 10.0.3.101

Check that you can ping BAM server bam1 from all servers.

(EDG) Run the Config Wizard to Extend the Domain
Because we have an existing domain we will extend it to support the BAM components.

(EDG) Running Config Script and Selecting Templates

From the Oracle common home $MW_HOME/oracle_common run the config wizard at common/bin/config.sh and select "Extend an existing WebLogic domain".  Select the WebLogic domain which is in the aserver (/u01/app/oracle/admin/soa-domain/aserver/soa_domain) directory.  When requested make sure you check "Business Activity Monitoring" and then proceed.

The following should be selected but grayed out:
 - Basic WebLogic Server Domain
 - Oracle BPM Suite
 - Oracle SOA Suite
 - Oracle Enterprise Manager
 - Oracle WSM Policy Manager
 - Oracle JRF

This will add BAM to our domain.

(EDG) Configuring RAC Data Sources

We need to configure the JDBC component schema as a multi data source schema.  Select the BAM Schema and set the following:

 - Service Name: bamedg.soa.oracle.com (the service name we created in the database)
 - Password: <password>
 - Host Names: rac1-vip.soa.oracle.com and rac2-vip.soa.oracle.com
 - Instance Names: RAC1 and RAC2
 - Port Numbers: 1521

If we selected a schema prefix of "DEV" then there is no need to set the schema names.  If you chose a different prefix then you need to set the schema.

Creating Distributed Queues

Because we are installing a cluster we need to customize the following sections:

 - JMS Distributed Destinations – to set up distributed queues
 - Managed Servers, Clusters, and Machines – to create our BAM managed servers and BAM cluster
 - Deployments and Services – to target our deployments at the correct cluster

In the Select JMS Distributed Destination Type screen we make sure that we are not using weighted distributed destinations but instead are using uniform distributed destinations (UDD) for the BAM JMS resources.

Create BAM Managed Servers and Cluster

In the Configure Managed Servers screen we change the name of the new server from bam_server1 to BAM1 and add a second BAM server, BAM2.
The BAM servers need to be listening on address bam1.soa.oracle.com or bam2.soa.oracle.com as appropriate.  This will restrict them to listening on the IP addresses that are configured to float between machines.  Both servers can be set to listen on port 9001.  Having all servers of the same kind listening on the same port makes it easier to see what is happening.

Server Name    Listen Address         Listen Port
BAM1           bam1.soa.oracle.com    9001
BAM2           bam2.soa.oracle.com    9001

On the Configure Clusters screen we can add a new cluster called BAM_Cluster and assign the two BAM servers to that cluster on the Assign Servers to Clusters screen.

Assign Servers to Machines

On the Configure Machines screen delete the LocalMachine that has been created, as we previously created the machines we need in the Unix tab when originally creating the domain.  We then assign the new servers to physical machines SOAHost1 and SOAHost2 on the Assign Servers to Machines screen.  Note that we want to assign BAM1 to SOAHost2 and BAM2 to SOAHost1.  Of course it would be better to have more machines so that BAM is not sharing a server with SOA.

Targeting Deployments, Services and Resources to BAM Cluster

Finally we need to make the following changes to the deployments and services to make sure that they are correctly targeted.
Application                 Targets
WSM-PM                      WSM_Cluster
DMS Application             Admin Server, WSM_Cluster, SOA_Cluster, BAM_Cluster
usermessagingserver         SOA_Cluster, BAM_Cluster
usermessagingdriver-email   SOA_Cluster, BAM_Cluster

Library                     Targets
oracle.rules.*              SOA_Cluster, BAM_Cluster
oracle.sdp.*                SOA_Cluster, BAM_Cluster
oracle.soa.*                SOA_Cluster
oracle.wsm.seedpolicies     WSM_Cluster, SOA_Cluster, BAM_Cluster
oracle.bam                  BAM_Cluster

Startup Class               Targets
JOC-Startup                 WSM_Cluster
OWSM Startup class          WSM_Cluster

Shutdown Class              Targets
JOC-Shutdown                WSM_Cluster

JDBC Data Source            Targets
mds-owsm                    Admin Server, WSM_Cluster
mds-owsm-rac0               Admin Server, WSM_Cluster
mds-owsm-rac1               Admin Server, WSM_Cluster
mds-soa                     Admin Server, SOA_Cluster
mds-soa-rac0                Admin Server, SOA_Cluster
mds-soa-rac1                Admin Server, SOA_Cluster
oraSDPMDatasource           SOA_Cluster, BAM_Cluster
oraSDPMDatasource-rac0      SOA_Cluster, BAM_Cluster
oraSDPMDatasource-rac1      SOA_Cluster, BAM_Cluster

All other items are left as they are set up by the wizard.  The domain has now been extended to support BAM.

(EDG) Restart Admin Server

To get our changes to the domain to take effect we need to restart the admin server.  We are then ready to make some further config changes.

Configuring JMS for Server Migration

We want to allow BAM Server 1 to migrate between machines, so we need to set up its queues on shared storage.  In the WebLogic console we go to Services->Persistence Store, select UMSJMSFileStore_auto_3 and change the directory to /u01/app/oracle/admin/soa-domain/soa-cluster/jms.  Repeat this for UMSJMSFileStore_auto_4.  This will cause the JMS servers in BAM1 and BAM2 to create their JMS queue files in that shared directory, allowing us to fail BAM1 over between nodes.
(EDG) Configuring Transaction Failover

When BAM Server 1 fails over we want it to be able to recover any in-flight transactions, so we need to go into Environment->Servers->BAM1->Configuration->Services and set the Directory to /u01/app/oracle/admin/soa-domain/soa-cluster/tlogs.

(EDG) Making the BAM Server System a Singleton

To make the BAM Server System a singleton we need to untarget it from the BAM2 managed server.  We do this in the WebLogic console by going to Deployments, choosing oracle-bam and then the Targets tab.  Select the following and then choose Change Targets:

 - /oracle/bam
 - oracle-bam-adc-ejb.jar
 - oracle-bam-ems-ejb.jar
 - oracle-bam-eventengine-ejb.jar
 - oracle-bam-reportcache-ejb.jar
 - oracle-bam-statuslistener-ejb.jar
 - sdpmessagingclient-ejb.jar

Set the target for these components to be BAM Server 1 only.  Although not strictly necessary, it is a good idea to explicitly set the BAMServer and BAMServerWS components to be targeted at BAM_Cluster.

(EDG) Disable Hostname Verification

We need to disable hostname verification for the BAM1 and BAM2 servers as we previously did for the AdminServer, WSM1, WSM2, SOA1 and SOA2 servers.
(EDG) Transfer Changes to Managed Domains

We need to make sure that the managed domains have the latest changes by using the pack command to bundle up a new domain template from the oracle_common/common/bin directory.  As before we use the aserver shared directory to move the domain template:

./pack.sh -managed=true -domain=$ORACLE_BASE/admin/soa-domain/aserver/soa_domain -template=$ORACLE_BASE/admin/soa-domain/aserver/soadomaintemplateExtBAM.jar -template_name=soa_domain_templateExtBAM

We then run the unpack command on each host to unpack the propagated template to the domain directory of the managed server:

./unpack.sh -domain=$ORACLE_BASE/admin/soa-domain/mserver/soa_domain -overwrite_domain=true -template=$ORACLE_BASE/admin/soa-domain/aserver/soadomaintemplateExtBAM.jar -app_dir=$ORACLE_BASE/admin/soa-domain/mserver/applications

If there are problems starting the node manager or the servers in the managed domain then you may need to delete the mserver/soa_domain directory and run unpack again.

(EDG) Start BAM Servers

I recommend restarting the node manager on the BAM machines and then using the WebLogic console to start the BAM managed servers.  Before starting BAM1 make sure that you have enabled the IP address for that server.  You can verify that BAM is ready for use by checking the following URLs:

http://bam1.soa.oracle.com:9001/OracleBAM
http://bam2.soa.oracle.com:9001/OracleBAM

Note that if, like me, your SOA cluster runs on a private network segment accessible only through the load balancer, then you must run the browser in that network segment.  In my case I only have Linux machines on that network, so although I can use Firefox to access the Oracle BAM screens they tell me that only Internet Explorer is supported.  If you see that message then relax, things are working as they should!

(EDG) Configuring BAM Web Applications

Now that we have working BAM servers we need to configure them for use in the cluster.
We will first configure the BAM web applications to be aware of which machine is hosting the BAM server.  Perform the following for each BAM server (BAM1 and BAM2):

 - Expand BAM in the EM console tree view.
 - Right click OracleBAMWeb(BAM1 or BAM2) and choose BAM Web Properties.
 - Set the Application URL to be the front end address – http://soa-cluster.soa.oracle.com
 - Set the Server Name to be the name of the BAM server running the Active Data Cache – BAM1.  This allows the BAM web applications to find the BAM server.
 - Click Apply.

Remember that the Server Name should be BAM1 for both servers.

(EDG) Configuring BAM ADC Server

We now need to configure the BAM server to use the correct address:

 - Right click OracleBAMServer(BAM1) and choose System MBean Browser.
 - Navigate using the tree view to Application Defined MBeans->oracle.bam.server->Server: BAM1->Application: oracle-bam->Config->BAMServerConfig.
 - Set the ADCServerName attribute to BAM1 (this identifies the BAM managed server running the Active Data Cache, also referred to as the BAM Server).
 - Set the ADCServerPort to 9001.
 - Click Apply.

With the BAM web apps and BAM server configured we now need to restart the BAM managed servers.
(EDG) Configuring OHS for BAM

We can now configure OHS to allow access to BAM by adding the following to the $ORACLE_BASE/admin/OHSn/config/OHS/ohsN/httpd.conf file after the SOA Composer or BPM entries:

# SOA composer application
   MatchExpression /soa/composer WebLogicCluster=soa1:8001,soa2:8001
# BPM
   MatchExpression /bpm/composer WebLogicCluster=soa1:8001,soa2:8001
# BPM
   MatchExpression /bpm/workspace WebLogicCluster=soa1:8001,soa2:8001
# BAM
   MatchExpression /OracleBAM WebLogicCluster=bam1:9001,bam2:9001
   MatchExpression /OracleBAMWS WebLogicCluster=bam1:9001,bam2:9001

We can then restart the web servers by issuing the command:

/u01/app/oracle/admin/OHSn/bin/opmnctl restartproc ias-component=ohsN

We can now access the BAM servers through the load balancer using Internet Explorer, where we should be presented with a login screen.

(EDG) Conclusion

We have now created our SOA cluster and configured it to run Web Services Manager (WSM_Cluster), SOA and BPM (SOA_Cluster) and BAM (BAM_Cluster).  In the next blog entry we will enable automatic failover of servers between machines and tidy up any other loose ends, but we now have a working cluster that we can use.
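The effect of those MatchExpression entries can be modelled with a small routing sketch.  This is a rough illustration only, not mod_wl_ohs itself: it assumes path-prefix matching with the longest matching prefix winning, and copies the prefix-to-cluster map from the httpd.conf fragment above.

```python
# Toy model of the OHS routing set up above: map a request path to the
# WebLogic cluster members that should serve it.
ROUTES = {
    "/soa/composer":  ["soa1:8001", "soa2:8001"],
    "/bpm/composer":  ["soa1:8001", "soa2:8001"],
    "/bpm/workspace": ["soa1:8001", "soa2:8001"],
    "/OracleBAM":     ["bam1:9001", "bam2:9001"],
    "/OracleBAMWS":   ["bam1:9001", "bam2:9001"],
}

def route(path):
    """Return cluster members for the longest matching prefix, or None."""
    matches = [p for p in ROUTES if path == p or path.startswith(p + "/")]
    return ROUTES[max(matches, key=len)] if matches else None

print(route("/OracleBAM/login"))   # -> ['bam1:9001', 'bam2:9001']
print(route("/bpm/workspace"))     # -> ['soa1:8001', 'soa2:8001']
print(route("/somewhere/else"))    # -> None (falls through to the default)
```

Matching on whole path segments (note the `p + "/"`) keeps /OracleBAM and /OracleBAMWS from shadowing each other, which is also why both entries are needed in httpd.conf.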


Installing an 11g SOA Cluster – Part IV More Configuration

Configuring a SOA 11g PS2 SOA Cluster – Part IV SOA Configuration

In this post we will continue to set up our SOA cluster.  Previously I covered setting up the environment with a Web Services Manager Policy Manager cluster.  We will now extend the domain created there to include SOA and BPM components as shown in the diagram above.

Extend the WebLogic Domain to Support SOA

We will extend the domain to support SOA/BPM managed servers in a cluster.

(EDG) Enable SOA Managed Server IP Addresses

Because the SOA managed servers will later be configured to run on any machine through whole server migration, we need to bind their IP addresses to some physical machines because we have not set up the automatic migration of IP addresses.  We do this by running the ifconfig and arping commands as shown below for each SOA server, on the physical machine we first want to run it on.  In the example below I use sub-adapter 8 on both machines for consistency and to avoid sub-adapter 9, which we used for the admin server.  10.0.3.111 and 10.0.3.112 are the IP addresses for soa1 and soa2.

sudo /sbin/ifconfig eth1:8 10.0.3.111 netmask 255.255.255.0
sudo /sbin/arping -q -U -c 3 -I eth1 10.0.3.111

Check that you can ping SOA server soa1 from server 2 and SOA server soa2 from server 1.

(EDG) Run the Config Wizard to Extend the Domain

Because we have an existing domain we will extend it to support the SOA and BPM components.

Running Config Script and Selecting Templates

From the Oracle common home $MW_HOME/oracle_common run the config wizard at common/bin/config.sh and select "Extend an existing WebLogic domain".  Select the WebLogic domain which is in the aserver (/u01/app/oracle/admin/soa-domain/aserver/soa_domain) directory.  When requested make sure you have the following checked, then proceed:

 - Oracle BPM Suite
 - Oracle SOA Suite

If you do not need or are not licensed for BPM Suite then just select SOA Suite.
The following should be selected but grayed out:

 - Basic WebLogic Server Domain
 - Oracle Enterprise Manager
 - Oracle WSM Policy Manager
 - Oracle JRF

Configuring RAC Data Sources

We need to configure all the JDBC component schemas as multi data source schemas.  Select the SOA Infrastructure, User Messaging Service and SOA MDS schemas and set the following:

 - Service Name: soaedg.soa.oracle.com (the service name we created in the database)
 - Password: <password> (assuming you set all schemas to the same password)
 - Host Names: rac1-vip.soa.oracle.com and rac2-vip.soa.oracle.com
 - Instance Names: RAC1 and RAC2
 - Port Numbers: 1521

If we selected a schema prefix of "DEV" then there is no need to set the individual schema names.  If you chose a different prefix then you need to select each schema individually and change the username.  This will set the schemas to use the RAC database and fail over between nodes if necessary.

Creating Distributed Queues

Because we are installing a cluster we need to customize the following sections:

 - JMS Distributed Destinations – to set up distributed queues
 - Managed Servers, Clusters, and Machines – to create our SOA managed servers and SOA cluster
 - Deployments and Services – to target our deployments at the correct cluster

In the Select JMS Distributed Destination Type screen we make sure that we are not using weighted distributed destinations but instead are using uniform distributed destinations (UDD) for all the JMS resources.

Create SOA/BPM Managed Servers and Cluster

SOA and BPM run in the same managed servers, so in the Configure Managed Servers screen we change the name of the new server from soa_server1 to SOA1 and add a second SOA server, SOA2.  The SOA servers need to be listening on address soa1.soa.oracle.com or soa2.soa.oracle.com as appropriate.  This will restrict them to listening on the IP addresses that are configured to float between machines.  Both servers can be set to listen on port 8001.
Having all servers of the same kind listening on the same port makes it easier to see what is happening.

Server Name   Listen Address         Listen Port
SOA1          soa1.soa.oracle.com    8001
SOA2          soa2.soa.oracle.com    8001

On the Configure Clusters screen we can add a new cluster called SOA_Cluster and assign the two SOA servers to that cluster on the Assign Servers to Clusters screen.

Assign Servers to Machines

On the Configure Machines screen delete the LocalMachine that has been created, as we previously created the machines we need in the Unix tab when originally creating the domain.  We then assign the new servers to physical machines SOAHost1 and SOAHost2 on the Assign Servers to Machines screen.

Targeting Deployments, Services and Resources to the SOA/BPM Cluster

Finally we need to make the following changes to the deployments and services to make sure that they are correctly targeted.

Application          Targets
wsm-pm               WSM_Cluster

Libraries            Targets
oracle.rules.*       SOA_Cluster
oracle.sdp.*         SOA_Cluster
oracle.bpm.*         SOA_Cluster
oracle.soa.*         SOA_Cluster

Startup Classes      Targets
JOC-Startup          WSM_Cluster
JOC-Shutdown         WSM_Cluster
OWSM Startup class   WSM_Cluster

JDBC                 Targets
mds-owsm             WSM_Cluster, AdminServer
mds-owsm-rac0        WSM_Cluster, AdminServer
mds-owsm-rac1        WSM_Cluster, AdminServer

All other items are left as they are set up by the wizard.  The domain has now been extended to support SOA and BPM.

(EDG) Restart Admin Server

To get our changes to the domain to take effect we need to restart the admin server.  We are then ready to make some further config changes.

Configure Coherence for SOA Cluster

Coherence is Oracle's data grid software, and the higher levels of the Oracle stack are moving away from JGroups and Java Object Cache to use Coherence for both cluster membership decisions and distributed object caching.  A Coherence cluster has no master node; any machine can be the master.
There are two ways to locate a Coherence cluster: through a broadcast request (the default), or through well known addresses.  The list below compares the two approaches feature by feature:

Configuration
  Multi-cast: same for all machines.
  Uni-cast: different setting for the Coherence localhost property on each machine.
Discovery Mechanism
  Multi-cast: multicast IP packets.
  Uni-cast: unicast IP packets to a list of servers specified in config (well known addresses, or WKAs).
Startup Order
  Multi-cast: no order required.
  Uni-cast: at least one of the servers in the WKA list must be the first to start.
Impact of Adding Servers
  Multi-cast: no impact.
  Uni-cast: ideally should update the config of all servers to add the new WKA to their lists.
Impact of Routers Between Servers
  Multi-cast: unless configured otherwise, routers will drop multi-cast packets.
  Uni-cast: no impact.
Impact of Multiple Coherence Clusters on Network
  Multi-cast: need to configure each cluster with a unique multicast address.
  Uni-cast: as long as the WKAs are separate for each cluster, no impact.

When using well known addresses, Coherence will go through the list of WKAs in an attempt to find an existing cluster.  If it does not find a server then it will start a cluster, but only if its localhost setting is the same as one of the WKAs; otherwise it will give up on the cluster!  When using multi-cast, Coherence broadcasts a message saying, in effect, "is there a cluster out there?"; if it gets a response it will join that cluster, otherwise it will create a cluster.  Once a cluster is joined, the two models behave in the same way.

Because of the problems with putting multi-cast messages through routers, the EDG recommends using well known addresses.  The impact of adding additional servers can be mitigated by creating hostnames for additional SOA servers ahead of time and adding them to the list of WKAs in each server.  If a server is unable to find any well known addresses, and it is not itself configured to listen on one of the well known addresses, then the server will not start.
(Coherence Wiki) If you decide to use multi-cast because you are not going through a router, then I recommend that you configure a cluster address different from the default of 227.7.7.9:9778 by changing the coherence.clusteraddress and coherence.clusterport settings.  Valid values for the cluster address are 224.0.0.0 to 239.255.255.255, and for the cluster port 1 to 65535.  (Coherence Wiki)

To change the multi-cast settings we change EXTRA_JAVA_PROPERTIES in setDomainEnv.sh:

EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES}
  -Dsoa.archives.dir=${SOA_ORACLE_HOME}/soa -Dsoa.oracle.home=${SOA_ORACLE_HOME}
  -Dsoa.instance.home=${DOMAIN_HOME}
  -Dtangosol.coherence.clusteraddress=225.99.88.77
  -Dtangosol.coherence.clusterport=2010 -Dtangosol.coherence.log=jdk
  -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl
  -Djava.protocol.handler.pkgs=${PROTOCOL_HANDLERS}
  -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true
  -Djavax.net.ssl.trustStore=${WL_HOME}/server/lib/DemoTrust.jks"
export EXTRA_JAVA_PROPERTIES

Note that if you change the multicast settings it needs to be done for each set of domain files; see the note later on using pack and unpack.

To apply the Coherence WKA configuration we edit the startup command arguments for each SOA managed server in the WebLogic console (managed server -> Configuration -> Server Start tab, Arguments field) and set them as follows:

Managed Server   WKA Config
SOA1             -Dtangosol.coherence.wka1=soa1 -Dtangosol.coherence.wka2=soa2 -Dtangosol.coherence.wka3=soa3 -Dtangosol.coherence.localhost=soa1
SOA2             -Dtangosol.coherence.wka1=soa1 -Dtangosol.coherence.wka2=soa2 -Dtangosol.coherence.wka3=soa3 -Dtangosol.coherence.localhost=soa2

Note that the multicast config is simpler but will probably not work across routers.
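The per-server WKA arguments in the table above differ only in the localhost value, so a small helper can generate them for each managed server.  A sketch using this article's hostnames (the wka_args function name is mine):

```shell
#!/bin/sh
# Build the Coherence WKA startup arguments for a given managed server.
# The WKA list (soa1..soa3) matches this article; only localhost varies.
wka_args() {
  localhost="$1"
  args=""
  i=1
  for wka in soa1 soa2 soa3; do
    args="$args -Dtangosol.coherence.wka$i=$wka"
    i=$((i + 1))
  done
  # $args is left unquoted so the leading space collapses
  echo $args "-Dtangosol.coherence.localhost=$localhost"
}

wka_args soa1   # paste the output into SOA1's Server Start Arguments field
```

Generating the string rather than typing it also avoids the stray newlines and smart dashes that a copy/paste from a document can introduce.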
Also notice that I have added a dummy third server to the list of WKAs, so that if I decide to add a third server to my cluster I don't need to change anything on the existing two servers and can start the servers in any sequence.  Finally, make sure that there are no newlines in the Arguments box!

(EDG) Configure B2B Queues

Because of the way B2B uses queues and topics it is necessary to create specific destination identifiers as outlined below:

Queue                              Create Destination Identifier
dist_B2BEventQueue_auto            jms/b2b/B2BEventQueue
B2B_IN_QUEUE                       jms/b2b/B2B_IN_QUEUE
B2B_OUT_QUEUE                      jms/b2b/B2B_OUT_QUEUE
B2BBroadcastTopic                  jms/b2b/B2BBroadcastTopic
XmlSchemaChangeNotificationTopic   jms/fabric/XmlSchemaChangeNotificationTopic

Disable Hostname Verification

We need to disable hostname verification for the SOA1 and SOA2 servers, as we previously did for the AdminServer and the WSM1 and WSM2 servers.

Transfer Changes to Managed Domains

We need to make sure that the managed domains have the latest changes by using the pack command, run from the oracle_common/common/bin directory, to bundle up a new domain template.  As before, we use the aserver shared directory to move the domain template:

./pack.sh -managed=true \
    -domain=$ORACLE_BASE/admin/soa-domain/aserver/soa_domain \
    -template=$ORACLE_BASE/admin/soa-domain/aserver/soadomaintemplateExtSOABPM.jar \
    -template_name=soa_domain_templateExtSOABPM

We then run the unpack command on each host to unpack the propagated template to the domain directory of the managed server:

./unpack.sh -domain=$ORACLE_BASE/admin/soa-domain/mserver/soa_domain \
    -overwrite_domain=true \
    -template=$ORACLE_BASE/admin/soa-domain/aserver/soadomaintemplateExtSOABPM.jar \
    -app_dir=$ORACLE_BASE/admin/soa-domain/mserver/applications

If there are problems starting servers in the managed domain then you may need to delete the mserver/soa_domain directory and run unpack again.
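Since the unpack invocation above must be identical on every host, it is worth wrapping it so each host runs exactly the same command.  A sketch only: the soahost1/soahost2 names are mine, it assumes passwordless ssh as the oracle user and the template on shared storage, and it echoes the commands for review rather than executing them.

```shell
#!/bin/sh
# Sketch: run the same unpack on every managed-server host.
ORACLE_BASE=${ORACLE_BASE:-/u01/app/oracle}
TEMPLATE=$ORACLE_BASE/admin/soa-domain/aserver/soadomaintemplateExtSOABPM.jar

# Assemble the unpack command line once, so every host gets the same flags.
unpack_cmd() {
  echo "./unpack.sh -domain=$ORACLE_BASE/admin/soa-domain/mserver/soa_domain" \
       "-overwrite_domain=true -template=$TEMPLATE" \
       "-app_dir=$ORACLE_BASE/admin/soa-domain/mserver/applications"
}

for host in soahost1 soahost2; do
  # Echoes the command for review; drop the leading echo to execute.
  echo ssh "oracle@$host" \
    "cd $ORACLE_BASE/product/fmw/oracle_common/common/bin && $(unpack_cmd)"
done
```

Keeping the command in one place also makes it harder to forget -overwrite_domain=true on one of the hosts.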
(EDG and EDG) Important: If You Changed setDomainEnv.sh

If you changed the Coherence clusteraddress and clusterport in setDomainEnv.sh then you need to make the same changes in every machine's managed domain where you ran unpack, as changes to setDomainEnv.sh are not propagated by the pack/unpack commands.

Unpacking XEngine Files

For each additional copy of the SOA binaries that you have (one on each machine if you are not using shared binaries) you will need to unpack the B2B XEngine files.  These files are unpacked for you on the machine where you ran the domain configuration wizard.

cd /u01/app/oracle/product/fmw/Oracle_SOA/soa/thirdparty/edifecs
tar -xzvf XEngine.tar.gz

Note that if you have shared binaries then you only need to do this for the shared binaries that were not mounted on the machine that ran the domain configuration wizard.

(EDG) Testing the Install

At this point I would recommend shutting down all servers and node managers, then bringing up the node managers and the Admin Server, and then using the admin server to start the WSM_Cluster servers followed by the SOA_Cluster servers.  If there are problems starting, check that you have correctly targeted all the SOA components and that the Coherence cluster in the SOA managed servers does not show any errors.

You can then validate that you can access the following URLs:

Web Services Manager Policy Manager
  http://wsm1:7010/wsm-pm, http://wsm2:7010/wsm-pm
  Click Validate Policy Manager.  A list of policies and assertion templates available in the data store appears.
SOA infrastructure
  http://soa1:8001/soa-infra, http://soa2:8001/soa-infra
SOA Process Composer
  http://soa1:8001/soa/composer, http://soa2:8001/soa/composer
Worklist Application
  http://soa1:8001/integration/worklistapp, http://soa2:8001/integration/worklistapp
B2B
  http://soa1:8001/b2bconsole, http://soa2:8001/b2bconsole
Messaging System Preferences
  http://soa1:8001/sdpmessaging/userprefs-ui, http://soa2:8001/sdpmessaging/userprefs-ui
BPM Composer
  http://soa1:8001/bpm/composer, http://soa2:8001/bpm/composer
BPM Workspace
  http://soa1:8001/bpm/workspace, http://soa2:8001/bpm/workspace

Note that weblogic/<password> will let you log in to all these URLs.  If you can get to all these components then you are well on your way to a working SOA cluster.  Note the EDG WSM URL is wrong.

(EDG and EDG) Configuring and Validating HTTP Servers for SOA

I amended my httpd.conf configuration as shown below to add routing to the SOA and BPM Suite.  Obviously if you did not deploy BPM Suite you do not need the bpm mappings.

<IfModule mod_weblogic.c>
# WSM-PM
  MatchExpression /wsm-pm WebLogicCluster=wsm1:7010,wsm2:7010
# SOA soa-infra app
  MatchExpression /soa-infra WebLogicCluster=soa1:8001,soa2:8001
# Worklist
  MatchExpression /integration WebLogicCluster=soa1:8001,soa2:8001
# B2B
  MatchExpression /b2bconsole WebLogicCluster=soa1:8001,soa2:8001
# UMS prefs
  MatchExpression /sdpmessaging/userprefs-ui WebLogicCluster=soa1:8001,soa2:8001
# Default to-do taskflow
  MatchExpression /DefaultToDoTaskFlow WebLogicCluster=soa1:8001,soa2:8001
# Workflow
  MatchExpression /workflow WebLogicCluster=soa1:8001,soa2:8001
# Required if attachments are added for workflow tasks
  MatchExpression /ADFAttachmentHelper WebLogicCluster=soa1:8001,soa2:8001
# SOA composer application
  MatchExpression /soa/composer WebLogicCluster=soa1:8001,soa2:8001
# BPM
  MatchExpression /bpm/composer WebLogicCluster=soa1:8001,soa2:8001
# BPM
  MatchExpression /bpm/workspace WebLogicCluster=soa1:8001,soa2:8001
</IfModule>

Note that
I used MatchExpression rather than Location because I had problems with Location when using virtual hosts.  In my configuration I use virtual hosts to restrict access to the WebLogic console and EM.  These changes need to be made to all OHS instances, in /u01/app/oracle/admin/OHSN/config/OHS/ohsN/httpd.conf.

We then use opmnctl restartproc to restart the HTTP servers:

/u01/app/oracle/admin/OHS1/bin/opmnctl restartproc ias-component=ohs1
/u01/app/oracle/admin/OHS2/bin/opmnctl restartproc ias-component=ohs2

We should now have access to our SOA services through the HTTP server and load balancer.  (EDG)

We can then check the following URLs to ensure that the web servers and load balancer are working:

WSM Policy Manager
  http://web1:7777/wsm-pm, http://web2:7777/wsm-pm, https://soa-cluster.soa.oracle.com/wsm-pm
SOA Infrastructure
  http://web1:7777/soa-infra, http://web2:7777/soa-infra, https://soa-cluster.soa.oracle.com/soa-infra
SOA Composer
  http://web1:7777/soa/composer, http://web2:7777/soa/composer, https://soa-cluster.soa.oracle.com/soa/composer
Worklist Application
  http://web1:7777/integration/worklistapp, http://web2:7777/integration/worklistapp, https://soa-cluster.soa.oracle.com/integration/worklistapp
Messaging System Preferences
  http://web1:7777/sdpmessaging/userprefs-ui, http://web2:7777/sdpmessaging/userprefs-ui, https://soa-cluster.soa.oracle.com/sdpmessaging/userprefs-ui
B2B
  http://web1:7777/b2bconsole, http://web2:7777/b2bconsole, https://soa-cluster.soa.oracle.com/b2bconsole
BPM Composer
  http://web1:7777/bpm/composer, http://web2:7777/bpm/composer, https://soa-cluster.soa.oracle.com/bpm/composer
BPM Workspace
  http://web1:7777/bpm/workspace, http://web2:7777/bpm/workspace, https://soa-cluster.soa.oracle.com/bpm/workspace

If all the above URLs work then our web servers and load balancer are configured correctly.

(EDG) Setting SOA/BPM Frontend Host & Cluster Address

We need to make the SOA cluster aware of how it is being accessed.
We do this by selecting the SOA cluster from the Clusters summary in the WebLogic console, selecting the HTTP tab, and setting the frontend host to soa-cluster.soa.oracle.com and the frontend HTTPS port to 443.  This also effectively sets the callback URL for SOA so that it will correctly generate callback addresses for the cluster rather than for the individual node that is generating the reference. (EDG)

To take advantage of the cluster for direct binding (RMI rather than SOAP over HTTP) we need to set the cluster address to the list of machines in the cluster.  This is set from the SOA cluster General tab in the WebLogic console.  We need to set the Cluster Address field to a comma separated list of the managed servers in our SOA/BPM cluster:

soa1:8001, soa2:8001

This will enable the SOA Suite to take advantage of the cluster when using optimized message calls. (EDG)

Important Note When Using Self-Signed Certificates

If you are using SSL for your front-end host (as recommended by the EDG) and are not using publicly certified certificates, then you need to import the certificate from your load balancer into your Java cacerts file; otherwise you will not be able to test your processes, and some components that make loopback calls may fail because they cannot verify the host.  To import the certificate, first export it from the load balancer using your browser, then take the saved file and add it using keytool:

<JAVA_HOME>/bin/keytool -importcert -file DownLoadedCert -keystore <JAVA_HOME>/jre/lib/security/cacerts -alias soa-cluster

This adds the certificate as a trusted certificate to the trust store and identifies it with the name soa-cluster.

Setting Message Stores for Failover

The SOA infrastructure uses JMS queues, and because these are set up as uniform distributed queues they need to be made available on shared storage for server failover.
Distributed queues allow part of each queue to be managed by an individual managed server.  If a managed server fails then any messages in its portion of the queue will be stuck until the server is restarted.  So that we can restart the server on a different node, to cope with hardware failure, we need to put the associated queue files onto shared storage.

We change all the stores from the WebLogic console by going to the Services -> Persistence Stores page; for each store we go into the Configuration tab and change the Directory field to /u01/app/oracle/admin/soa-domain/soa-cluster/jms, which is the shared directory we set up.  Each managed server will then create a unique file in that directory to store its messages. (EDG)

Setting a Shared Transaction Store

The SOA Infrastructure makes use of XA transactions and so uses the WebLogic transaction coordinator.  If a managed server fails, its in-flight transactions are persisted in a file.  This file must be accessible from all nodes so that in the event of machine failure a managed server can be restarted and still find its in-flight transactions; it can then either roll them forward or back as appropriate.  Leaving in-flight transactions active can cause problems for resource managers such as a database, which may maintain locks.

To place the transaction logs in a shared location, for each of the SOA managed servers go to the Configuration -> Services tab and set the Default Store to a shared location such as /u01/app/oracle/admin/soa-domain/soa-cluster/tlogs, which is in our shared cluster directory.  Each managed server will then store its transaction logs here and will be able to find them in the event of being started on a different node.
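Before restarting the cluster it is worth confirming from every node that the shared JMS and transaction log directories really are mounted and writable; a missing mount only shows up as an obscure store error at server start.  A minimal sketch (directory names from this article, function name mine), to be run on each machine:

```shell
#!/bin/sh
# Check that a list of shared cluster directories exists and is writable
# from this node; prints one status line per directory.
check_shared_dirs() {
  rc=0
  for dir in "$@"; do
    if [ -d "$dir" ] && [ -w "$dir" ]; then
      echo "OK       $dir"
    else
      echo "MISSING  $dir"
      rc=1
    fi
  done
  return $rc
}

# On every node of the cluster:
# check_shared_dirs /u01/app/oracle/admin/soa-domain/soa-cluster/jms \
#                   /u01/app/oracle/admin/soa-domain/soa-cluster/tlogs
```

A non-zero exit status from check_shared_dirs on any node means the shared storage needs fixing before the managed servers are started.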
(EDG) Configuring File & FTP Adapters for HA

The file and FTP adapters do not deal with transactional resources, so to avoid race conditions the adapters can be configured to coordinate through locks in the database and through a shared file structure.  The FTP config for HA is basically the same as the file adapter config, except that you will do additional configuration of the FTP adapter for each specific FTP server.  To configure the adapters to use the DB for HA coordination, go to the FileAdapter or FTPAdapter under Deployments and choose the Configuration -> Outbound Connection Pools tab.  Expand the ConnectionFactory entry and, for the FileAdapter, select the eis/HAFileAdapter outbound connection properties.  The data sources are already set up to point to the SOA repository.  You just need to change controlDir to point to a shared location available to all servers, such as /u01/app/oracle/admin/soa-domain/soa-cluster/fadapter.

After saving your changes you will be prompted for a plan location.  The plan needs to be available to all managed servers, so set the plan location to /u01/app/oracle/admin/soa-domain/soa-cluster/dd/fadapter/Plan.xml. (EDG)

Important Note

Any time that you create a plan file you need to store it on shared storage that is accessible from all nodes, because it does not get propagated with the rest of the WebLogic configuration.

Summary

We have now done the basic configuration of our SOA cluster for HA, but we have not yet enabled it for automatic server failover between nodes and we have not yet added the BAM components to our cluster.  We will perform these activities in the next couple of blog entries.


Configuring a SOA 11g PS2 SOA Cluster – Part III WSM Configuration

In this post I will go through the steps required to set up a SOA cluster.  In previous posts I covered how to install a RAC database for use by SOA, how to prepare the environment for SOA, and how to install the software required.  The diagram shows the configuration I am creating; note that the Enterprise Deployment Guide (EDG) expects the web servers (OHS servers) to be on separate machines from the WebLogic servers (Admin and WSM servers).

Configuration Approach

Because configuring the SOA cluster in a highly available environment involves a lot of activities, we will follow the Enterprise Deployment Guide (EDG) practice of configuring the environment a piece at a time and then validating what we have done.  We will do the following:

- Create a WebLogic domain and configure it for Web Services Manager by creating a policy management cluster.
- Create Oracle HTTP Server instances and configure them to talk to the newly created SOA domain.
- Extend the WebLogic domain to support SOA & BPM functionality.
- Extend the WebLogic domain to support BAM functionality.
- Configure the node manager to support starting and failing over all managed servers.

Create the WebLogic Domain

Our first step is to create a new WebLogic domain and within it configure a Web Services Manager policy management cluster.  This will require us to create a domain with an Admin Server and managed servers for the OWSM policy management cluster.

Enabling Access to the ifconfig and arping Commands

To start and stop the network adapter for the admin server we need root privileges.  We can provide these just for the ifconfig and arping commands by using sudo, which allows us to run certain commands with root privileges.
To set this up, on each machine run the visudo command as root to edit the /etc/sudoers permission file and add the following:

## SOA Suite Settings
oracle ALL=NOPASSWD: /sbin/ifconfig, /sbin/arping

This grants the oracle user access to the ifconfig and arping commands through sudo without having to provide a password.  Obviously this needs to be done on all middleware servers.  (EDG)

Enable Admin IP Address

The Admin server needs to be able to start on any of the SOA servers, so we have assigned it an IP address that can float between machines.  Before we configure our cluster we need to assign the Admin server IP address to a machine in the cluster.  For my environment I issued the following commands as the root user:

sudo /sbin/ifconfig eth1:9 10.0.3.100 netmask 255.255.255.0
sudo /sbin/arping -q -U -c 3 -I eth1 10.0.3.100

The ifconfig command associates a sub-interface (eth1:9 in my case) with a particular IP address and netmask.  I used eth1:9 to avoid any possible future clashes with the node manager, which will be responsible for assigning floating IP addresses for managed servers that can migrate between machines.  The arping command sends an unsolicited update (-U) 3 times (-c 3) on interface eth1 (-I eth1), alerting nodes on the subnet to the association between this IP address and the adapter.  This is important if the IP address was previously assigned to another adapter, as happens during migration of the admin server from one machine to another.  It is a good idea to create this as a shell script on shared storage, perhaps in the AServer folder, because you will need it to start the admin server on different machines. (EDG)

Creating the Domain

Having allocated an IP address for the admin server we now need to configure the initial domain. (EDG)

Running Config Script and Selecting Templates

To create the domain we run the config script found in the oracle_common/common/bin directory.
This starts the Fusion Middleware Configuration Wizard, which allows us to “Create a New WebLogic Domain”.  We select the “Oracle WSM Policy Manager” template, which should also automatically select the Oracle JRF template.  The WSM Policy Manager template, as you might expect, configures the domain to support the policy manager.  This in turn relies upon the Java Required Files being configured in the domain.  The Java Required Files act as an abstraction layer isolating the higher level Oracle components from the underlying application server.  We also need to select the “Enterprise Manager” template to provide the admin support for SOA Suite. (EDG)

Specifying Domain Location & JVM

On the “Specify Domain Name and Location” screen make sure that we set the domain location to /u01/app/oracle/admin/soa-domain/aserver and the domain name to soa_domain.  The domain name is of course arbitrary.  After providing a password for the weblogic user we then select the JDK and the WebLogic domain startup mode.  Set the startup mode to “Production Mode” and select the JRockit JDK.  If you followed my preparation instructions for a 64-bit JVM then this should be your only option.  Note that you can change the VM used later, but it is easier to set it here.

Configuring RAC Data Sources

On the “Configure JDBC Component Schema” screen select the OWSM MDS Schema and then select “Configure selected component schemas as RAC multi data source schemas in the next panel”.  This allows us to set up OWSM to use a RAC database.  We can then enter the Service Name as soaedg.soa.oracle.com (this was configured in the previous blog entry); the username can be left as DEV_MDS if we used DEV as our prefix.  Provide the password of the database user and then enter the host details.  The host details should be the virtual listener addresses that we set up; see my earlier post on configuring a RAC database for SOA.
In my case the host names are rac1-vip.soa.oracle.com and rac2-vip.soa.oracle.com, with instance names RAC1 and RAC2 and port number 1521.  Before advancing to the next screen make sure that all nodes of the RAC database are up.  The WebLogic server makes use of multiple data source pools to connect to a RAC database, each pool targeting a specific RAC instance.  It is important to verify that all nodes are working correctly so that in the event of database node failure the remaining nodes can continue to service database requests.

Configuring the Admin Server

After the database configuration we are asked about the optional configuration we wish to perform; select “Administration Server”, “Managed Servers, Clusters and Machines” and “Deployments and Services”.  This will allow us to tie the admin server to the correct address, create the managed servers for the OWSM policy management cluster, and target the right services at the cluster.

On the “Configure the Administration Server” screen we want to change the listen address to the hostname we created for the admin server, in my case admin.soa.oracle.com.  This makes the Admin Server listen only on this address.  It stops the admin server from binding to other managed server addresses and also makes sure that it is actually listening on the hostname that we have assigned for the Admin Server.  Unless there is likely to be a port clash with another WebLogic server domain running on the same machine, leave the listen port at 7001.

Creating the OWSM Managed Servers and Cluster

I added the following managed servers, which I will use to run the OWSM policy manager.

Server Name   Listen Address         Listen Port
WSM1          wsm1.soa.oracle.com    7010
WSM2          wsm2.soa.oracle.com    7010

Because WSM1 and WSM2 are running on different fixed IP addresses they can listen on the same port number.
If, like me, you used the same IP address for multiple fixed-IP managed servers (WSM2 and BAM2 for example), it is important to assign different port numbers to the managed servers that run different roles.

Note that the EDG has the managed servers listen on the SOA hostname address while I have them listen on the WSM hostname address.  In my configuration that is the same as the SOA hostname address, but by having this additional hostname I can just change the IP address of a WSM hostname and retarget the server to another physical machine without having to make any other configuration changes, either in the WSM managed server configuration or in the front-end web server configuration.

After creating the servers I created a cluster called WSM_Cluster, leaving the cluster settings at their defaults, and then assigned the two WSM managed servers to the WSM cluster.

Creating “Physical” Machines and Targeting Servers

We are currently creating a cluster with two physical machines, so I need to create a WebLogic representation of these machines.  The machines correspond to a node manager, and later we will configure a node manager to run on each machine.  Because I was running on Linux I chose to create “Unix Machine”s.  I added my two machines, calling them SOAHost1 and SOAHost2 and identifying their node manager listen addresses as soa-cluster1.soa.oracle.com and soa-cluster2.soa.oracle.com.  I left the other settings at their default values.  I also added an extra machine called AdminHost that listens on localhost.  This allows us to move the Admin Server between machines without having to retarget it.
Having created the machines I then assigned the servers to machines as follows:

Server        Role                        Target Machine
AdminServer   WebLogic Admin Server       AdminHost
WSM1          WSM Policy Manager Server   SOAHost1
WSM2          WSM Policy Manager Server   SOAHost2

Targeting Deployments, Services and Resources to the WSM Cluster

Make sure that the following items are correctly targeted to the appropriate server/cluster.

Application Deployment                    Target
wsm-pm                                    WSM_Cluster

Library Deployment                        Target
oracle.wsm.seedpolicies#11.1.1@11.1.1     WSM_Cluster

Shutdown Class                            Target
JOC-Shutdown                              WSM_Cluster

Startup Class                             Target
JOC-Startup                               WSM_Cluster
OWSM Startup Class                        WSM_Cluster

JDBC                                      Target
All JDBC System Resources                 WSM_Cluster, AdminServer

All other items are targeted only at the Admin Server.

Create the Domain

We can now finally hit the create button to create the domain.  Note that this will create the soa_domain on shared storage, with the domain configuration in /u01/app/oracle/admin/soa-domain/aserver/soa_domain and applications stored in /u01/app/oracle/admin/soa-domain/aserver/applications/soa_domain.

Starting & Configuring the Domain

Configure Admin Server Security

Before starting the admin server we need to configure the security properties file for the Admin server by creating /u01/app/oracle/admin/soa-domain/aserver/soa_domain/servers/AdminServer/security/boot.properties with the following contents:

username=weblogic
password=<password>

This prevents us from being prompted for a username and password when booting the admin server. (EDG)

Configure and Start Node Manager

Set Node Manager to Use Scripts

Before using the Node Manager we must set it to use the start scripts by running setNMProps.sh from the Oracle common home: /u01/app/oracle/product/fmw/oracle_common/common/bin/setNMProps.sh.
This forces the node manager to use start scripts rather than directly launching the JVM, by setting the “StartScriptEnabled” property to “true” in nodemanager.properties, found in the WebLogic home common/nodemanager directory (/u01/app/oracle/product/fmw/wlserver_10.3/common/nodemanager).  This is important because the start scripts set up the (extensive) environment required by the SOA Suite.  Failure to run this may lead to ClassNotFoundErrors when starting WebLogic.  Beware that the setNMProps.sh script is very naive in the way it runs: it checks for the existence of the string StartScriptEnabled=true in the nodemanager.properties file.  If you comment out this line it will still think that the property is set and do nothing.

Important: This command must be run in every middleware home.  For example, if you are using shared storage for the FMW binaries and have two copies being shared, you need to run this command in both copies.  If you have multiple servers pointing to the same FMW binaries you only need to run this script once for each set of binaries.  (EDG)

Create Node Manager Directory

If you are using a shared directory for the FMW software then you should create a directory outside of it to hold the node manager configuration and log files.  This needs to be local to each machine, so a good location would be $ORACLE_BASE/admin/NodeManager.  Having created the directory we need to copy the contents of $WLS_HOME/common/nodemanager to that directory.
cp /u01/app/oracle/product/fmw/wlserver_10.3/common/nodemanager/* /u01/app/oracle/admin/NodeManager

Having created a new directory we need to tell the node manager to use it by editing the $WLS_HOME/server/bin/startNodeManager.sh script, changing the line that sets the node manager home to point to the new home as shown below:

NODEMGR_HOME="${ORACLE_BASE}/admin/NodeManager"

Start the Node Manager

Start the node manager by running the following command from the WebLogic server/bin directory:

/u01/app/oracle/product/fmw/wlserver_10.3/server/bin/startNodeManager.sh `hostname`

This will restrict the node manager to listening on a single IP address rather than multiple IP addresses.  I suggest you put this into a script to make life easier. (EDG)

Start the Admin Server via Script

We can then start the Admin Server by running startWebLogic.sh.  This may take a while to come to the running state.

With the Admin Server running we can go to the console at http://admin.soa.oracle.com:7001/console and set the node manager username and password by navigating to the security settings under the domain (soa_domain) and selecting the General tab.  Under the advanced settings, set the node manager username and node manager password to any values you want; I used the username nmadmin.  Once this is set we can start the Admin Server using the node manager.

Shut down the Admin Server, either by killing the JVM directly or by using the console to shut down the server. (EDG)

Start the Admin Server via Node Manager

Using WLST we can start the Admin Server with the following script:

nmConnect(username, password, 'soa-cluster1', '5556', 'soa_domain', '/u01/app/oracle/admin/soa-domain/aserver/soa_domain/')
nmStart('AdminServer')
exit()

We run this script from the WebLogic Scripting Tool, wlst.sh, in the Oracle common home /u01/app/oracle/product/fmw/oracle_common/common/bin.
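Since this start sequence will be repeated every time the admin server moves between machines, it is worth wrapping the WLST call in a shell script on shared storage.  A sketch only: the /tmp script location and NM_USER/NM_PASS defaults are mine, and the domain directory follows the aserver path used in this post.

```shell
#!/bin/sh
# Sketch: generate and (optionally) run the WLST admin-server start script.
MW_HOME=${MW_HOME:-/u01/app/oracle/product/fmw}
NM_USER=${NM_USER:-nmadmin}     # node manager username set in the console earlier
NM_PASS=${NM_PASS:-CHANGE_ME}   # replace with the node manager password
SCRIPT=/tmp/startAdmin.py

# Write the WLST commands with the credentials filled in.
cat > "$SCRIPT" <<EOF
nmConnect('$NM_USER', '$NM_PASS', 'soa-cluster1', '5556', 'soa_domain',
          '/u01/app/oracle/admin/soa-domain/aserver/soa_domain/')
nmStart('AdminServer')
exit()
EOF

# Uncomment once the node manager is running:
# $MW_HOME/oracle_common/common/bin/wlst.sh "$SCRIPT"
```

Keeping the credentials in environment variables rather than in the script itself avoids leaving the node manager password lying around on shared storage.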
(EDG) With the admin server up and running we can check that it is working and that the configuration is correct by going to the WebLogic console at http://admin:7001/console and the EM console at http://admin:7001/em.

(EDG) Creating Separate Domain Directories for Managed Servers

We want to create a separate domain directory for the managed servers to isolate them from the Admin Server.  These directories will be local to the node on which a managed server is running.  This approach avoids interference between nodes.

./pack.sh -managed=true -domain=/u01/app/oracle/admin/soa-domain/aserver/soa_domain -template=/u01/app/oracle/admin/soa-domain/aserver/soadomaintemplate.jar -template_name=soa_domain_template

This command packages up our domain as a template for use in a managed server domain and stores it in the soadomaintemplate.jar file.  This file can be transported to other nodes to provide access to the managed server domain on those nodes.  This process is simplified by putting the generated template jar file in a shared location, as I have done above.

./unpack.sh -domain=/u01/app/oracle/admin/soa-domain/mserver/soa_domain -template=/u01/app/oracle/admin/soa-domain/aserver/soadomaintemplate.jar -app_dir=/u01/app/oracle/admin/soa-domain/mserver/applications

This command unpacks the SOA domain for use by the managed servers and registers the local domain with the node manager by updating the $WLS_HOME/common/nodemanager/nodemanager.domains file.  Note that we keep a separate managed server domain on each node, so we must run unpack on each node to create the local files, even though the nodemanager.domains file already knows about the domain if we are using shared storage for the middleware binaries.  Note also that if we have moved the node manager directory then we will need to copy the nodemanager.domains file to the new node manager directory.
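The per-node steps just described (unpack, plus copying nodemanager.domains if you relocated the node manager home) can be collected into a small helper.  This is a sketch using this post's paths; the provision_mserver name and the DRY_RUN switch are my own, and DRY_RUN=1 just prints the commands so the sequence can be reviewed before running it for real on each node.

```shell
# Sketch: provision the local managed server domain on one node.
ASERVER=/u01/app/oracle/admin/soa-domain/aserver
MSERVER=/u01/app/oracle/admin/soa-domain/mserver
WLS_HOME=/u01/app/oracle/product/fmw/wlserver_10.3

run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

provision_mserver() {
  # unpack the template generated by pack.sh on the first node
  run "$WLS_HOME/common/bin/unpack.sh" \
      -domain="$MSERVER/soa_domain" \
      -template="$ASERVER/soadomaintemplate.jar" \
      -app_dir="$MSERVER/applications"
  # if the node manager home was relocated, bring the updated domain list along
  run cp "$WLS_HOME/common/nodemanager/nodemanager.domains" \
      /u01/app/oracle/admin/NodeManager/
}

# Print the command sequence without touching the install:
DRY_RUN=1 provision_mserver
```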
(EDG) Apply JRF to WSM Cluster

We need to apply the Java Required Files (the abstraction layer between SOA Suite and the underlying app server) to the WSM cluster.  This is done from the Fusion Middleware Control page by selecting the cluster under the WebLogic domain tree and clicking the Apply JRF Template button.

(EDG) Disable Hostname Verification on Servers

If you are not going to set up certificates to enable certificate-based hostname verification of your servers, which validates that the servers are connecting to whom they think they are connecting, then you need to disable hostname verification.  This is done in the WebLogic console for each of the following servers:

AdminServer
WSM1
WSM2

To disable hostname verification go to the settings for the server, choose the SSL tab, set Hostname Verification to "None" in the Advanced section, and save the settings for each server before applying the changes through the WebLogic Change Center.  After this you need to restart the Admin Server.

(EDG) Starting the WSM Cluster

At this point you should have the node manager running on the first server and both servers configured with managed server domains.  We can now start the node manager on the second server (using the script you created).  Once the node manager is running on all servers we can start the WSM1 and WSM2 managed servers from the WebLogic console.  Validate that they are up and running and that the policies are deployed by going to http://wsm1:7010/wsm-pm and http://wsm2:7010/wsm-pm.  If you get a page not found error (404) then there may be a problem with your targeting of applications to the WSM cluster.

(EDG) Configuring Java Object Cache for WSM Cluster

Web Services Manager uses the Oracle Java Object Cache to provide distributed caching to improve performance.  Hopefully this will be replaced by Coherence in a future release.
In the meantime we need to configure it using the WebLogic Scripting Tool (WLST) in $MW_HOME/Oracle_SOA/common/bin.  Issue the connect() command and provide the Admin server username (weblogic) and password as requested.  When prompted for the server URL enter t3://admin:7001 rather than using localhost, as we told the Admin server to listen on hostname admin, not localhost.  Then, having checked that both WSM servers are up and running, execute the script to configure the cache:

execfile('/u01/app/oracle/product/fmw/oracle_common/bin/configure-joc.py')

Accept the script defaults.  When prompted for the Cluster Name enter the name of the cluster you created – I used WSM_Cluster.  When prompted for the Discover Port enter 9991 or any other unused port outside the reserved address range.  You should see output describing the configuration of the two WSM servers.

(EDG) Creating & Configuring Oracle HTTP Servers

Having created an OWSM cluster we can now create the front end web server instances.

Creating the OHS Server Instance

We begin by running the configuration wizard for the web tier on each machine: Web_Tier_ORACLE_HOME/bin/config.sh.

Configure Components

We are not using Oracle Web Cache so we can remove that from the list of components to configure.  We want to configure Oracle HTTP Server and Associate Selected Components with WebLogic Domain.  We could use Web Cache as a load balancer but I opted for a standalone load balancer.

Specify WebLogic Domain

We now need to specify the details of our WebLogic domain.

Domain Host Name : admin
Domain Port No : 7001
User Name : weblogic
Password : <weblogic_password>

The domain host name is the Admin Server hostname.

Specify Component Details

We now specify the location of the Oracle HTTP Server instance files.  We store this in the Admin directory.
Instance Home Location : $ORACLE_BASE/admin/OHSN
Instance Name : ohs_instanceN
OHS Component Name : ohsN

If we have two web servers then we will have component names ohs1 and ohs2.

Configure Ports

Either provide a staticports.ini file for complete control over the ports or use the auto port configuration.

Configure New Instance

After specifying how, or if, we wish to be alerted to security updates we can review our configuration before clicking Configure to create our new OHS instance.

Difference from EDG

Note that the EDG suggests creating the Web instance as part of the Web tier install and then registering WebLogic in a separate step.  I have done the install in one step and the instance creation and WebLogic registration in a second step.  This approach works better than the EDG suggestion if you are using a shared software directory and have more Web instances than software locations.

(EDG) Configure OHS for SOA

Now that we have our OHS instances we need to configure them to forward requests to the SOA Suite.

Configure Virtual Hosts, Admin & EM Consoles & WSM Cluster

First we set up the OHS to work with virtual hosts, the WebLogic and EM consoles and the WSM cluster.  We define three virtual hosts, one each for admin, internal and external access.  This allows us to configure different rules for these different hostnames.  We configure this by adding the following to the end of the httpd.conf file of each OHS instance ($ORACLE_BASE/admin/OHSn/config/OHS/ohsn/httpd.conf).
#############
# SOA Suite #
#############

<IfModule mod_weblogic.c>
  MatchExpression /wsm-pm WebLogicCluster=wsm1:7010,wsm2:7010
</IfModule>

###########################
# SOA Suite Virtual Hosts #
###########################

NameVirtualHost *:7777

<VirtualHost *:7777>
    ServerName soa-cluster.soa.oracle.com
    ServerAlias soa-cluster
    ServerAdmin you@your.address
    RewriteEngine On
    RewriteOptions inherit
</VirtualHost>

<VirtualHost *:7777>
    ServerName soa-cluster-admin.soa.oracle.com
    ServerAlias soa-cluster-admin
    ServerAdmin you@your.address
    RewriteEngine On
    RewriteOptions inherit
  # Admin Server and EM only through this virtual host
  <Location /console>
    SetHandler weblogic-handler
    WebLogicHost admin
    WebLogicPort 7001
  </Location>
  <Location /consolehelp>
    SetHandler weblogic-handler
    WebLogicHost admin
    WebLogicPort 7001
  </Location>
  <Location /em>
    SetHandler weblogic-handler
    WebLogicHost admin
    WebLogicPort 7001
  </Location>
</VirtualHost>

<VirtualHost *:7777>
    ServerName soa-cluster-internal.soa.oracle.com
    ServerAlias soa-cluster-internal
    ServerAdmin you@your.address
    RewriteEngine On
    RewriteOptions inherit
</VirtualHost>

#########################
# SOA Suite WSM Cluster #
#########################

# WSM-PM
<Location /wsm-pm>
    SetHandler weblogic-handler
    WebLogicCluster wsm1:7010,wsm2:7010
</Location>

Note that we have departed from the EDG by putting our Location tags in the httpd.conf file so that we can restrict access to the consoles.  In particular, the consoles can only be accessed through the soa-cluster-admin virtual host.  The ServerAlias entries allow us to use either canonical hostnames or short hostnames to access our virtual hosts.
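Since the same fragment has to go into every instance's httpd.conf, one option is to keep it in a single file and append just an Include line per instance.  This is a sketch: the soa.conf file name and the add_soa_include helper are my own, the paths follow this post's layout, and the demo runs against a scratch ORACLE_BASE rather than a real /u01/app/oracle.

```shell
# add_soa_include HTTPD_CONF SOA_CONF: append an Include line once, so the
# edit is idempotent across re-runs.
add_soa_include() {
  grep -q "Include $2" "$1" 2>/dev/null || echo "Include $2" >> "$1"
}

# Demo against a scratch directory tree mirroring this post's instance layout.
ORACLE_BASE=$(mktemp -d)
for n in 1 2; do
  dir="$ORACLE_BASE/admin/OHS$n/config/OHS/ohs$n"
  mkdir -p "$dir"
  : > "$dir/httpd.conf"
  add_soa_include "$dir/httpd.conf" "$dir/soa.conf"
  add_soa_include "$dir/httpd.conf" "$dir/soa.conf"   # second call is a no-op
done
```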
After editing this file we then need to restart the OHS on each machine:

cd $ORACLE_BASE/admin/OHSn/bin
./opmnctl restartproc process-type=OHS

Finally we can check that we can access all three virtual hosts through the load balancer:

https://soa-cluster.soa.oracle.com
http://soa-cluster-internal.soa.oracle.com
http://soa-cluster-admin.soa.oracle.com

Then check that all three virtual hosts can access the WSM PM cluster:

https://soa-cluster.soa.oracle.com/wsm-pm
http://soa-cluster-internal.soa.oracle.com/wsm-pm
http://soa-cluster-admin.soa.oracle.com/wsm-pm

Finally we can check that the EM and WebLogic consoles are only accessible through the admin virtual host:

https://soa-cluster.soa.oracle.com/console (should fail)
http://soa-cluster-internal.soa.oracle.com/console (should fail)
http://soa-cluster-admin.soa.oracle.com/console
https://soa-cluster.soa.oracle.com/em (should fail)
http://soa-cluster-internal.soa.oracle.com/em (should fail)
http://soa-cluster-admin.soa.oracle.com/em

Note that if you are using a similar load balancer configuration to mine, at this point you will find that attempts to access http://soa-cluster.soa.oracle.com automatically redirect you to the secure version of the web site.

(EDG) Configure Admin Server HTTP Frontend

Sometimes the WebLogic console needs to construct a canonical URL to refer to a page.  In this case it will build the URL using the Frontend Host property in the Protocols HTTP section of the AdminServer settings, so we need to set this to point to the load balancer URL (soa-cluster-admin.soa.oracle.com) using the console.  After activating the change, the console and Enterprise Manager can only be reliably accessed through the load balancer.

(EDG) Configuring OS for Admin Server Failover

We now have an OWSM cluster set up and running.  We would like to be able to test failover of the Admin Server.
As this requires shutting down the admin network interface on one server and starting it on another, I have created some scripts to do this and configured the operating system to allow this to be done by the oracle user without having to change to the root user.

Releasing the Admin IP Address on Server 1

After stopping the Admin Server the IP address can be released by issuing the command:

sudo /sbin/ifconfig eth1:9 down

Failure to release the IP address may lead to unusual errors or an inability to assign the IP address to the second server.

(EDG) Bringing Up Admin Server on Server 2

We can now assign the Admin Server IP address on server 2 and start the Admin Server using the startup script we created earlier.  Note that you need to make sure you execute the arping command to flush the arp caches for things to work properly.  We can now test that the admin server is still accessible.

(EDG) Failing Back Admin Server to Server 1

We can now fail back the Admin Server to server 1 using the same procedure that we followed to move it to server 2:

Release IP on original server
Assign IP on new server
Flush arp caches

The admin server is then ready to run on server 1 again.

(EDG) WSM Cluster Now Complete

We now have a working WSM cluster and are ready for the next stage, configuring a SOA/BPM cluster, which I will cover in the next post.
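For reference, the failover helper scripts described above might look something like the sketch below.  The interface alias (eth1:9), the admin VIP (10.0.3.100) and the netmask are assumptions based on this environment, and DRY_RUN=1 prints the commands instead of executing them with sudo, so the sequence can be reviewed safely.

```shell
# Sketch of the admin server IP failover helpers.
ADMIN_IF=eth1:9
ADMIN_IP=10.0.3.100
ADMIN_MASK=255.255.255.0

run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else sudo "$@"; fi; }

release_admin_ip() {
  run /sbin/ifconfig "$ADMIN_IF" down
}

assign_admin_ip() {
  run /sbin/ifconfig "$ADMIN_IF" "$ADMIN_IP" netmask "$ADMIN_MASK" up
  # gratuitous ARP so neighbours learn that the address has moved
  run /sbin/arping -q -U -c 3 -I "${ADMIN_IF%%:*}" "$ADMIN_IP"
}

# Review the command sequence without touching the network:
DRY_RUN=1 release_admin_ip
DRY_RUN=1 assign_admin_ip
```

Run release_admin_ip on the old host before assign_admin_ip on the new one; skipping the arping step is what causes the stale ARP cache problems mentioned above.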


Installing an 11g SOA Cluster – Part II Software Installation

Configuring an 11g PS2 SOA Cluster – Part II Software Installation

In this post I will go through the software installation process for an 11g PS2 SOA cluster.  I will build on top of the environment described in my previous post.

Target Software Directory Structure

I am installing my middleware into a shared middleware home so I only need to install it once.  The final structure of my middleware software installation is shown below.

Web Server

The SOA cluster will host a couple of web servers for resiliency and these will load balance across the WebLogic servers.  I installed the 64-bit version of Oracle HTTP Server (OHS) as I was running on a 64-bit OS and would be using a 64-bit JVM.  It is a good idea to match the OHS to the JVM, either both 32-bit or both 64-bit.  The web server is part of the Oracle Web Utilities download.

First I ran the installer for Web Tier Utilities 11.1.1.2.  I created an Oracle Inventory in /u01/app/oraInventory.  I selected software install only, not install and config.  Although the EDG says to do an install and config, I find it easier to completely separate the configuration from the software installation because it allows me to do instance creation and WebLogic server registration as part of the same configuration step.  It also makes it easy to keep the software on a shared storage device; in my case I am only using one shared storage area for the software, although in production I would want two so that I could do a rolling upgrade.

I installed the software on a single machine into the /u01/app/oracle/product/fmw middleware home that already contains my JRockit JVM and JRockit Mission Control.  I set the Oracle Home to Oracle_WEB as this is the web utilities I am installing.  After installing 11.1.1.2 I then ran the 11.1.1.3 upgrade, again choosing /u01/app/oracle/product/fmw as the middleware home and Oracle_WEB as the Oracle Home for web utilities.  This brings the web utilities up to 11g PS2.
Oracle Home Registration

Now that I have the Web Server software installed I need to update the Oracle Inventory on the machines in the SOA cluster on which I did not run the installer.  I do this by executing the attachHome.sh script in the $ORACLE_HOME/oui/bin directory.  I need to do this for two Oracle Homes: the oracle_common home and the Oracle_WEB home.  Because the Oracle_WEB home depends on the oracle_common home, we run the script for the oracle_common home first.  I found that I needed to provide the jreLoc parameter for the oracle_common home.

/u01/app/oracle/product/fmw/oracle_common/oui/bin/attachHome.sh -jreLoc /u01/app/oracle/product/fmw/oracle_common/jdk

We then run it for the Oracle_WEB home.

/u01/app/oracle/product/fmw/Oracle_WEB/oui/bin/attachHome.sh

WebLogic Server

The next thing to install is Oracle WebLogic Server 10.3.3.  Because I am using a 64-bit JDK I chose to install the generic WLS rather than a platform-specific version.

java -jar wls1033_generic.jar

I created a "new" middleware home in my middleware home directory /u01/app/oracle/product/fmw and ignored the warning that there were already files in that directory.  I then selected the JRockit JDK as the JVM to be associated with the middleware home and selected a typical install.  When the install was finished I unchecked the Run Quickstart option as I was not yet ready to create a WebLogic domain.

BEA Home Registration

Because we installed the WebLogic software onto a shared server we need to copy the beahome details from the oracle user directory on the machine on which we did the install to the other nodes in the SOA cluster.

scp -r bea soa-cluster2:/home/oracle

SOA Suite

We are now ready to install the SOA Suite software.  Because SOA Suite 11g R1 PS2 is supplied as a patch set, we must first install SOA Suite 11g R1 PS1.  When prompted for the JDK location I provided /u01/app/oracle/product/fmw/jrrt-4.0.1-1.6.0, which is where I installed JRockit.
After checking prerequisites the installer should find the middleware home (/u01/app/oracle/product/fmw); if not, provide the location where the WebLogic server and web tier are installed and choose a name for the Oracle SOA home – I chose Oracle_SOA.  After installing 11.1.1.2 I then ran the 11.1.1.3 upgrade (SOA Suite 11g R1 PS2), again choosing /u01/app/oracle/product/fmw as the middleware home and Oracle_SOA as the Oracle Home for SOA.  This brings the SOA Suite up to 11g R1 PS2.  Note that in addition to upgrading the SOA Suite (BPEL PM, Mediator, Rules, B2B, Human Workflow, BAM and Enterprise Manager), the PS2 patch set also installs the BPM Suite.

Oracle Home Registration

Now that I have the SOA software installed I again need to update the Oracle Inventory on the machines in the SOA cluster on which I did not run the installer.  I do this by executing the attachHome.sh script in the $ORACLE_HOME/oui/bin directory.  I need to do this for the Oracle SOA home Oracle_SOA.

/u01/app/oracle/product/fmw/Oracle_SOA/oui/bin/attachHome.sh

Service Bus

The final piece of software to install is the Service Bus.  You need to choose a custom installation because you do not want the Oracle Service Bus IDE, so uncheck this option.  You can install the Oracle Service Bus IDE if you have installed OEPE (Oracle Enterprise Pack for Eclipse), which can be installed standalone or with non-generic WebLogic Server installs.  You can only install the examples if you have installed the samples database, which again is an option with non-generic WebLogic Server installs.  So after choosing custom installation I made sure that the Service Bus IDE and Service Bus Samples options were not checked.
After passing the prerequisite checks I specified the same middleware home (/u01/app/oracle/product/fmw) as before, this time selecting Oracle_OSB as the Oracle Home and adding the WebLogic server install location (/u01/app/oracle/product/fmw/wlserver_10.3); note that all these values should be set by default.

Oracle Home Registration

Now that I have the OSB software installed I again need to update the Oracle Inventory on the machines in the SOA cluster on which I did not run the installer.  I do this by executing the attachHome.sh script in the $ORACLE_HOME/oui/bin directory.  I need to do this for the Oracle OSB home Oracle_OSB.

/u01/app/oracle/product/fmw/Oracle_OSB/oui/bin/attachHome.sh

Summary

We have now installed the software for:

Oracle HTTP Server (part of the Fusion Middleware Web Utilities)
Oracle WebLogic Server
Oracle SOA Suite (includes software for Oracle Business Process Management Suite)
Oracle Service Bus

All this software is installed into a common Fusion Middleware home on a shared storage system.  In production I would want to have two copies of this, although I could achieve that by cloning the volume on the NAS device.  In my next post I will look at the steps involved in configuring a SOA cluster.

References

Enterprise Deployment Guide for Oracle SOA Suite
Installing Oracle HTTP Server
Installing SOA Suite
Installation Guide for Oracle Service Bus
Installing Oracle Service Bus
Attaching Oracle Homes to Central Inventory and Identifying BEA Home


Installing an 11g SOA Cluster – Part I Preparation

Configuring a SOA Cluster – Part I Preparation

In this post I will go through the initial steps required to create a SOA cluster.  I will use the RAC database created in the previous posting 'Off the RAC'.  We will follow the Enterprise Deployment Guide and along the way give some explanation as to why things are being done the way they are.

Target

The target configuration we are aiming at is shown below, with the SOA servers running on Oracle Enterprise Linux 5.5.  We will use the same openFiler NAS device as we used for the RAC database.  We will create two SOA servers – SOA-Cluster1 and SOA-Cluster2.  The two SOA servers will use the internal LAN for access to shared file storage; the external LAN is used for access to the RAC cluster, inter-cluster communication and access to the SOA servers by the load balancer LB.  The public WAN is used by all clients of the SOA cluster.

I am using two physical machines to host six virtual machines running under Oracle VirtualBox.  The two physical machines have 8GB memory each.  The RAC cluster and NAS device run on one physical machine; the load balancer and SOA cluster run on the other physical machine.

NFS Preparation

I wanted to keep the software on a shared disk.  In addition, the EDG requires that SOA cluster files such as transaction logs and JMS queue files are kept on shared storage, as well as the Admin Server domain files.  To support this I created three shares on the OpenFiler NAS device in their own logical volume.

Volume      Size   Share Location          Description
fmw         10GB   /mnt/soa/fmw/share      Middleware Software
aserver     2GB    /mnt/soa/aserver/share  Admin Server Domain Config
soacluster  2GB    /mnt/soa/cluster/share  SOA Shared Cluster Files

The shares were configured with public guest access and RW access permissions.  UID/GID Mapping was set to no_root_squash, I/O Mode set to sync, Write delay set to no_wdelay and Request Origin Port set to insecure (>1024).
OS Preparation

The first step was to install the OS and configure it to use yum.  After updating packages to the latest revisions I could then apply the packages needed by SOA.

yum install binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat

I also modified the /etc/sysconfig/ntpd file to add a -x flag at the start of the options to allow clock slew.

Users

I then created the following user and appropriate groups:

User    Default Group  Groups
oracle  oinstall       oinstall, oracle

I also added the following to the .bash_profile:

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

Network

I set up three network cards on my Linux servers:

eth0 – DHCP configured to allow access to the outside world.
eth1 – dedicated to the external LAN; this is used to reach the SOA servers, has fixed IP addresses and is also used for the floating IP addresses required by the SOA servers.  It is also used to access the RAC cluster.
eth2 – dedicated to the internal LAN; this is only used to access the NAS filer and has fixed IP addresses.

So each SOA server has a DHCP address, a fixed IP address on the external LAN and a fixed IP address on the internal LAN.  I provided the following hosts file:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
10.0.4.15       soa-cluster1.soa.oracle.com soa-cluster1
10.0.4.16       soa-cluster2.soa.oracle.com soa-cluster2
#######
# RAC #
#######
10.0.4.200      nas1.soa.oracle.com     nas1
10.0.4.210      rac1.soa.oracle.com     rac1
10.0.4.220      rac2.soa.oracle.com     rac2
10.0.3.200      rac-scan.soa.oracle.com rac-scan
10.0.3.201      rac1-vip.soa.oracle.com rac1-vip
10.0.3.202      rac2-vip.soa.oracle.com rac2-vip
#######
# SOA #
#######
# FIXED
10.0.3.1        lb.soa.oracle.com lb
10.0.3.15       wsm1.soa.oracle.com wsm1
10.0.3.16       wsm2.soa.oracle.com wsm2
10.0.3.16       bam2.soa.oracle.com bam2
10.0.3.15       osb1.soa.oracle.com osb1
10.0.3.16       osb2.soa.oracle.com osb2
10.0.3.15       web1.soa.oracle.com web1
10.0.3.16       web2.soa.oracle.com web2
# FLOATING
10.0.3.100      admin.soa.oracle.com admin
10.0.3.101      bam1.soa.oracle.com bam1
10.0.3.111      soa1.soa.oracle.com soa1
10.0.3.112      soa2.soa.oracle.com soa2
# VIRTUAL
192.168.1.10    soa-cluster.soa.oracle.com soa-cluster
192.168.1.10    soa-cluster-admin.soa.oracle.com soa-cluster-admin
192.168.1.10    soa-cluster-internal.soa.oracle.com soa-cluster-internal

The RAC section provides the names of the RAC servers; only the rac1-vip and rac2-vip addresses are actually needed.  The SOA section provides the addresses needed by the clustered SOA environment.  The fixed addresses provide hostnames for the web servers, web services policy managers, BAM report servers and OSB servers.  The floating IP address for the admin server must be manually managed and allows the admin server to be moved between machines.  The other floating IP addresses are managed by the node manager and are used to support whole server migration of the SOA servers and the BAM active data cache.  The virtual IP addresses are the addresses used by the load balancer to provide access to the SOA cluster for admin users, internal users and external users.
The use of different virtual addresses allows the load balancer to restrict access to certain services based on the source of the user.  I use separate hostnames for all the different managed servers to make it easier to move them between machines.  To run a managed server on a different machine I just need to change the target machine in WebLogic and change the IP address in the /etc/hosts file.

Note that the IP addresses used by RAC and SOA Suite components are only accessible on the internal and external LANs; they are not routable outside of that environment.  This is good security practice.  The only way to access the SOA and RAC servers is to be on the external LAN or to go through the load balancer.  The virtual addresses used by the load balancer should be routable (I have replaced the real IP addresses used by my load balancer with different addresses to avoid exposing Oracle internal addresses).

File Structure

I created the following file structure on the Linux servers.  Folders in bold are mount points for shared files.  Ownership of the entire /u01 sub-tree was given to oracle in group oinstall (chown -R oracle:oinstall /u01).  Permissions were set to 775 (chmod -R 775 /u01).

The aserver folder is used to hold the master cluster config and is used by the admin server; putting it on shared storage allows the admin server to be run on any host in the cluster.  The fmw folder is used to hold all software; putting it on shared storage means that the software only needs to be installed once for the cluster (or twice if you follow the recommendation to have two shared volumes for software to allow for shared storage failure).  The soa-cluster folder is used to hold transaction logs, JMS queues, deployment descriptors and file adapter control files.
Putting these items onto shared storage allows for whole server migration (JMS and transaction logs), allows for coordination of adapters accessing shared resources (file adapter) and simplifies adapter configuration (deployment plans are accessible to all nodes).

NFS Client

I added the following entries to the /etc/fstab file to enable the SOA servers to mount the shared NFS file systems:

nas1:/mnt/soa/fmw/share  /u01/app/oracle/product/fmw  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/soa/aserver/share  /u01/app/oracle/admin/soa-domain/aserver  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/soa/cluster/share  /u01/app/oracle/admin/soa-domain/soa-cluster  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0

After mounting the NFS directories it was necessary to rerun the chown and chmod commands executed earlier to set permissions correctly on the NFS folders.  If you get a permission denied error, make sure that you set no_root_squash on all the shares.

Java

I am using 64-bit Linux and so I want to use a 64-bit JDK.  Oracle Fusion Middleware only ships with 32-bit JDKs, so it is necessary to install the 64-bit JDK separately and use the generic WebLogic installer.  I downloaded JRockit and installed it as the oracle user in /u01/app/oracle/product/fmw/jrrt-4.0.1-1.6.0.  It only needs to be installed on one node as we are installing it to a shared location.  Instead of JRockit I could have installed the Oracle HotSpot JVM.

In addition to installing JRockit I also installed JRockit Mission Control as the oracle user in /u01/app/oracle/product/fmw/jrmc-4.0.1-1.6.0 to assist in diagnosing any Java-related problems.  Again it only needs to be installed once to be available to all nodes in the cluster.

After installing the JDK I edited the jre/lib/security/java.security file and changed the reference to /dev/urandom to be /dev/./urandom.
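The java.security change can be scripted; below is a sketch using sed against a scratch copy (the fix_urandom helper and the demo file are my own; on a real install you would point it at $JAVA_HOME/jre/lib/security/java.security, and securerandom.source is the standard JDK property name).

```shell
# fix_urandom FILE: switch the JDK's entropy source to /dev/./urandom so the
# JVM's special-case handling of the literal path /dev/urandom is bypassed.
fix_urandom() {
  sed -i 's|securerandom.source=file:/dev/urandom|securerandom.source=file:/dev/./urandom|' "$1"
}

# Demo against a scratch file standing in for jre/lib/security/java.security.
demo=$(mktemp)
echo 'securerandom.source=file:/dev/urandom' > "$demo"
fix_urandom "$demo"
cat "$demo"
```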
This may improve SOA startup times by using a pseudo-random generator rather than a blocking random generator.

Background

My final OS preparation step was to set the desktop background differently on each machine so that I knew which machine I was on just by seeing the background.  This helps to avoid unfortunate incidents of doing the wrong thing on the wrong machine.

Load Balancer

I used the Zeus Traffic Manager as a load balancer.  I added the following to the /etc/hosts file for the load balancer:

10.0.3.1        lb.soa.oracle.com lb
10.0.3.15       web1.soa.oracle.com web1
10.0.3.16       web2.soa.oracle.com web2
192.168.1.10    soa-cluster.soa.oracle.com soa-cluster
192.168.1.10    soa-cluster-admin.soa.oracle.com soa-cluster-admin
192.168.1.10    soa-cluster-internal.soa.oracle.com soa-cluster-internal

Normally the soa-cluster internal, external and admin virtual hosts would have separate virtual IP addresses; however, I was limited in the number of fixed IP addresses I had, so I used the traffic manager to treat the three hostnames as three different sites.

Server Pools

I configured a pool called "SOA Cluster Pool" with two servers, web1:7777 and web2:7777.  This server pool consists of the web servers configured to front end the SOA Suite.

Rules

I then created some rules to control access to back end URLs.

Restrict Host Names

This rule only allows requests that have the correct hostname to be forwarded to the SOA cluster.  If the wrong hostname is used it replies with a message indicating which host names can be used and how to add them to a Windows machine.
This rule enforces the internal/external/admin separation by making sure that the request is targeted at one of these three hostnames.

$headerHost = http.getHostHeader();
if( $headerHost != "soa-cluster.soa.oracle.com"
    && $headerHost != "soa-cluster"
    && $headerHost != "soa-cluster-internal.soa.oracle.com"
    && $headerHost != "soa-cluster-internal"
    && $headerHost != "soa-cluster-admin.soa.oracle.com"
    && $headerHost != "soa-cluster-admin"
  ){
    http.sendResponse( "403 Permission Denied",
    "text/html",
    "Access not allowed using hostname ".
        $headerHost."<BR>\n".
        "Please use <A href=\"https://soa-cluster.soa.oracle.com".http.getPath()."\">soa-cluster.soa.oracle.com</A>, ".
        "<A href=\"http://soa-cluster-internal.soa.oracle.com".http.getPath()."\">soa-cluster-internal.soa.oracle.com</A> ".
        "or <A href=\"http://soa-cluster-admin.soa.oracle.com".http.getPath()."\">soa-cluster-admin.soa.oracle.com</A> as appropriate.<BR>\n".
        "To access these host names add the following to your hosts file ".
        "(Linux /etc/hosts or Windows C:\\windows\\system32\\drivers\\etc\\hosts).<BR>\n<HR>\n".
        "10.148.44.206\tsoa-cluster.soa.oracle.com soa-cluster<BR>\n".
        "10.148.44.206\tsoa-cluster-admin.soa.oracle.com soa-cluster-admin<BR>\n".
        "10.148.44.206\tsoa-cluster-internal.soa.oracle.com soa-cluster-internal<BR>\n",
    "" );
}

Deny console and em

This rule allows access to the /em and /console paths only if the target host is soa-cluster-admin.  In a real deployment the soa-cluster-admin address would only be available internally, and even then might be restricted to an admin LAN.

$hostname = http.getHostHeader();
if( !string.startswith( $hostname, "soa-cluster-admin" ) ) {
     $path = http.getPath();
     if( string.startswith( $path, "/em" )
        || string.startswith( $path, "/console" ) ){
         http.sendResponse( "403 Permission Denied", "text/html", "No access to admin functions on this host.", "");
     }
}

Redirect External to SSL

This rule forces all access to the external hostname to use SSL by redirecting all non-SSL traffic sent to the external hostname to the SSL port.  In a real deployment the firewall would only allow SSL traffic through from external clients.

$headerHost = http.getHostHeader();
if( $headerHost == "soa-cluster.soa.oracle.com"
    || $headerHost == "soa-cluster" ){
     http.changeSite( "https://soa-cluster.soa.oracle.com:443" );
}

Virtual Servers

I then created the following virtual servers:

External SOA Cluster – listens on soa-cluster.soa.oracle.com:443 – rules: Restrict Host Names, Deny console and em – pool: SOA Cluster Pool
Internal SOA Cluster – listens on soa-cluster-internal.soa.oracle.com:80 – rules: Restrict Host Names, Redirect External to SSL, Deny console and em – pool: SOA Cluster Pool

Note there is no virtual server for Admin SOA Cluster.  This is because virtual servers in Zeus are IP address/port number based, and the Admin and Internal SOA clusters use the same IP address and port number.
The Deny console and em rule prevents requests to the internal or external SOA clusters from accessing the /em and /console paths, and hence denies them access to admin functions.

DB Preparation

The RAC database must be configured for use by the SOA cluster as outlined in the EDG.

Service Creation

To do this we first create two database services, soaedg.soa.oracle.com and bamedg.soa.oracle.com. This allows us to control the database resources allocated to the SOA Suite. We create the services with the following SQL commands:

```
EXECUTE DBMS_SERVICE.CREATE_SERVICE(SERVICE_NAME => 'soaedg.soa.oracle.com', NETWORK_NAME => 'soaedg.soa.oracle.com');
EXECUTE DBMS_SERVICE.CREATE_SERVICE(SERVICE_NAME => 'bamedg.soa.oracle.com', NETWORK_NAME => 'bamedg.soa.oracle.com');
```

After adding the services to the database we then assign them to the instances and start them using srvctl:

```
srvctl add service -d rac -s soaedg -r rac1,rac2
srvctl add service -d rac -s bamedg -r rac1,rac2
srvctl start service -d rac -s soaedg
srvctl start service -d rac -s bamedg
```

Once added, the services will automatically start with the database.

Process & Session Limits

The SOA Suite is a database session hog; to run efficiently it needs a large number of sessions. This is configured by setting the processes parameter (assuming you are not using MTS; if using MTS then set the sessions parameter rather than the processes parameter). Alter the number of processes using the following SQL command:

```
ALTER SYSTEM SET PROCESSES=400 SCOPE=SPFILE;
```

The Enterprise Deployment Guide recommends 300 processes for SOA and another 100 for BAM, hence 400 for SOA and BAM together. Note that this is in addition to any other process requirements.

Repository Creation

With the database set up we can run the Repository Creation Utility (RCU) (rcuHome/bin/rcu) to create the schemas required by the SOA Suite. Select the SOA Infrastructure and it should also choose the AS Common Schemas by default.
If you don’t use BAM you can deselect that option, but it doesn’t hurt to install it just in case you change your mind.

When asked to provide the database details, provide either the rac-scan address of the RAC cluster or one of the RAC VIPs, rac1-vip or rac2-vip. The service name should be the newly created service name. You will need an account with SYSDBA privileges to run this.

When asked to provide a prefix it is easiest to use the DEV prefix, as this is what is assumed in the SOA domain creation wizard. The prefix is provided to allow you to have multiple SOA installations in the same database; if you don’t need to do this then stick with DEV as your prefix.

I found that it took 2 minutes to create the tablespaces on my cluster and 13 minutes to create the schemas.

XA Support

It is necessary to grant transaction management privileges to the soainfra user with the following SQL commands, which must be run with SYSDBA privileges:

```
Grant select on sys.dba_pending_transactions to dev_soainfra;
Grant force any transaction to dev_soainfra;
```

XA is heavily used in the SOA Suite, and failing to set this will cause problems recovering transactions after a crash.

Summary

After configuring our NAS device, our load balancer, the host OS and the database, we are now ready to install and configure our SOA cluster. I will look at that in my next post.

References

Enterprise Deployment Guide for Oracle SOA Suite
Enterprise Deployment Overview
Database and Environment Preconfiguration
Oracle Fusion Middleware Release Notes
SecureRandom class reads from /dev/random even if /dev/urandom specified


Off the RAC

Configuring a RAC Cluster for SOA

To get the highest availability for a SOA cluster the backend database needs to be highly available. So in this post I will go through the minimum requirements to get a RAC cluster up and running, ready for use by SOA. Note that this configuration is not suitable for production, but it is useful to enable you to develop and test in an environment that is similar to production.

Target

I decided to go for an 11gR2 RAC cluster running on Oracle Enterprise Linux 5.5. I used two Linux servers for database machines and OpenFiler as the NFS server to provide shared storage. I created all of these as images under VirtualBox.

NFS Preparation

I brought up OpenFiler and, after initial configuration to use the internal RAC LAN, I created a single volume group (rac) and then created the following volumes with associated shares:

Volume | Size | Share Location | Description
db | 10GB | /mnt/rac/db/share | RAC Database Software
grid | 10GB | /mnt/rac/grid/share | RAC Grid Software
cluster | 1.5GB | /mnt/rac/cluster/share | RAC Cluster Files
data | 10GB | /mnt/rac/data/share | RAC Data Files

The shares were configured with public guest access and RW access permissions. UID/GID Mapping was set to no_root_squash, I/O Mode set to sync, Write delay set to no_wdelay and Request Origin Port set to insecure (>1024).

OS Preparation

The first step was to install the OS and configure it to use yum. After updating packages to the latest revisions I could then apply the packages needed by RAC. The easiest way to apply the required packages is to install the package oracle-validated (yum install oracle-validated), as this automatically installs all required packages for RAC and sets the necessary system parameters.

I also modified the /etc/sysconfig/ntpd file to add a -x flag at the start of the options to allow clock slew.
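The ntpd edit is easy to get wrong, so a quick check is worthwhile. This is a minimal sketch, not part of the install; the OPTIONS line mirrors the edit described above, and the temp-file wrapper is a hypothetical helper so the check can be run anywhere.

```shell
#!/bin/sh
# Simulate the edited /etc/sysconfig/ntpd in a temp copy and verify
# that the -x (slew) flag is the first option, as required so RAC
# clocks are slewed rather than stepped.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
EOF
if grep -q '^OPTIONS="-x' "$cfg"; then
  echo "clock slew enabled"      # prints "clock slew enabled"
else
  echo "add -x to OPTIONS" >&2
fi
```

On a real server you would point the grep at /etc/sysconfig/ntpd itself and restart ntpd after the change.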
Users

I then created the following user and appropriate groups:

User | Default Group | Groups
oracle | oinstall | oinstall, oracle, dba

I also added the following to the .bash_profile, changing the ORACLE_HOSTNAME and ORACLE_SID as appropriate:

```
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=rac1.soa.oracle.com; export ORACLE_HOSTNAME
ORACLE_UNQNAME=rac; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11gR2/db; export ORACLE_HOME
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
```

Network

I actually set up three network cards on my Linux servers:

eth0 – DHCP configured, to allow access to the outside world.
eth1 – dedicated to the external RAC LAN. This is only reachable by SOA servers, has fixed IP addresses, and is also used for the floating IP addresses required by the RAC listeners.
eth2 – dedicated to the internal RAC LAN. This is only reachable by RAC servers, has fixed IP addresses, and also provides access to the storage device.

So each RAC server had a DHCP address, a fixed IP address on the external RAC LAN and a fixed IP address on the internal RAC LAN. I provided the following hosts file:

```
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
# ::1                   localhost6.localdomain6 localhost6
#######
# RAC #
#######
10.0.4.200      nas1.soa.oracle.com     nas1
10.0.4.210      rac1.soa.oracle.com     rac1
10.0.4.220      rac2.soa.oracle.com     rac2
10.0.3.200      rac-scan.soa.oracle.com rac-scan
10.0.3.201      rac1-vip.soa.oracle.com rac1-vip
# rac1 is on this network at 10.0.3.210
10.0.3.202      rac2-vip.soa.oracle.com rac2-vip
# rac2 is on this network at 10.0.3.220
```

The last three addresses are dynamically registered by the grid services layer and are used by the network listeners. These are the addresses that RAC will expose to the SOA Suite.

File Structure

I created the following file structure on the Linux servers; the mount points for shared files are those listed in the fstab entries below. Ownership of the entire /u01 sub-tree was given to oracle in group oinstall (chown -R oracle:oinstall /u01). Permissions were set to 775 (chmod -R 775 /u01).

NFS Client

I added the following entries to the /etc/fstab file to enable the RAC servers to mount the shared NFS file systems:

```
nas1:/mnt/rac/grid/share     /u01/app/11gR2/grid               nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/rac/db/share       /u01/app/oracle/product/11gR2/db  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/rac/cluster/share  /u01/cluster                      nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/rac/data/share     /u01/oradata                      nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```

After mounting the NFS directories it was necessary to rerun the chown and chmod commands executed earlier to set the permissions correctly on the NFS folders.

Background

My final OS preparation step was to set the desktop background differently on each machine, so that I knew which machine I was on just by seeing the background. This helps to avoid unfortunate incidents of doing the wrong thing on the wrong machine.

Snapshot

Having prepared everything I shut down the three virtual machines (nas1, rac1 and rac2) and took a snapshot of the virtual images, labeling them pre-grid. Then, if there were problems later, I could revert to the configuration just before installing any software. When starting the virtual machines I always started the OpenFiler first, so that the rac servers would be able to find it.

Grid Install

With the OS prepared I logged in as the oracle user and kicked off the grid install, choosing the advanced install option. I identified my nodes as rac1 and rac2, with the internal RAC network as the private interface and the external RAC network as the public interface. I used the shared file system storage option and claimed external redundancy, setting the OCR file location to /u01/cluster/storage/ocr and the voting disk location to /u01/cluster/storage/vdsk. I installed the software onto the shared disk at /u01/app/11gR2/grid. The install automatically installs the software on both rac nodes.

During the verification you may find you are still missing a couple of packages and that some settings are not correct. The packages can be added using yum without aborting the install, and the installer generates root scripts to adjust any parameters that need modifying.

Snapshot

After installing the grid software I again shut down all the servers and took a snapshot of each of them, labeling it grid.

DB Install

With the cluster services installed and running I logged in as the oracle user and kicked off the database install, choosing the database software only option and selecting a RAC install on the rac1 and rac2 nodes. I identified the software location as /u01/app/oracle/product/11gR2/db.

Snapshot

After installing the database software I again shut down all the servers and took a snapshot of each of them, labeling it db.

Database Creation

With the database software installed I ran the $ORACLE_HOME/bin/dbca utility to create a RAC database. I chose the advanced install, selected the rac1 and rac2 nodes and chose the AL32UTF8 character set. On my machine the database configuration wizard took about 10 hours to complete, but it did finish successfully.

Snapshot

After creating the database I shut it down using srvctl stop database -d rac, then shut down all the servers and took a snapshot of each of them, labeling it rac. At this point I deleted some of the earlier snapshots to reduce disk usage and potentially improve performance a little in the virtual machines.

Next Steps

With a RAC database available I am now ready to install and configure a SOA cluster, which I will cover in the next few postings.

References

I found the following resources very helpful:

Oracle Database 11g Release 2 RAC On Linux Using NFS
Configure OpenFiler 2.3 as a basic NFS Server
OpenFiler Downloads
Oracle Database Downloads
Oracle Grid Infrastructure for Linux Downloads


SoapServerURL and SoapCallbackURL Explained in 10.1.3

Within the BPEL process manager there are two properties that control the URLs used to invoke BPEL instances: the soapServerUrl and the soapCallbackUrl. In this post I will explore the meaning of these two properties. They become very important when setting up secure environments or HA environments, and understanding how they are used is important for any BPEL administrator.

soapCallbackUrl

This is used by the BPEL engine to construct the callback address on an asynchronous interaction. This is when the BPEL engine will call a service and then expect that service to call it back. In the request to the service the BPEL engine provides a callback address so that the service knows where to send the response. This is all done using WS-Addressing, as shown in the sample message sent from BPEL below:

```
<env:Envelope env:encodingStyle=""
    xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <env:Header>
    <ReplyTo xmlns="http://schemas.xmlsoap.org/ws/2003/03/addressing">
      <Address>http://soa.vm.oracle.com:7777/orabpel/default/AsyncEchoClientProcess/1.0/AsyncEchoProcess/AsyncEchoProcessRequester</Address>
      <PortType xmlns:ptns="http://xmlns.oracle.com/AsyncEchoProcess">ptns:AsyncEchoProcessCallback</PortType>
      <ServiceName xmlns:snns="http://xmlns.oracle.com/AsyncEchoProcess">snns:AsyncEchoProcessCallbackService</ServiceName>
    </ReplyTo>
    <MessageID ans1:rootId="40004" ans1:parentId="40004" ans1:priority="0"
        xmlns="http://schemas.xmlsoap.org/ws/2003/03/addressing"
        xmlns:ans1="http://schemas.oracle.com/bpel">
      bpel://localhost/default/AsyncEchoClientProcess~1.0/40004-BpInv0-BpSeq0.3-3
    </MessageID>
  </env:Header>
  <env:Body>
    …
  </env:Body>
</env:Envelope>
```

Note that /env:Envelope/env:Header/ReplyTo/Address is of the form:

{soapCallbackUrl}/orabpel/{domainName}/{ProcessName}/{ProcessRevision}/{PartnerLink}/{PartnerLinkRole}

The important bit is that the first part of the ReplyTo Address is formed from the soapCallbackUrl.

Important Point

The hostname in the soapCallbackUrl should be resolvable (can be converted to an IP address) and routable (IP packets can be delivered to it) from any service which is expected to provide an async callback to BPEL. If this is not the case then the BPEL process must override these values before invoking the service from which it expects a callback. Note that this may be the address of a front-end web server or load balancer, and not the address of the BPEL server.

Changing the soapCallbackUrl

The soapCallbackUrl is set by default to the hostname of the machine on which BPEL was installed. It can be changed by going into the BPEL Admin console (http://hostname:port/BPELAdmin). Any changes only take effect after a reboot of the BPEL server.

soapServerUrl

This is used by the BPEL engine in a number of different ways, each outlined in the sections below. Basically the soapServerUrl is used to tell the BPEL server the address at which clients can find it. This may be the address of the front-end web server or load balancer, and so is not necessarily the address of the BPEL server.

Important Point

Like the soapCallbackUrl, the hostname in the soapServerUrl should be resolvable (can be converted to an IP address) and routable (IP packets can be delivered to it) from any system which is expected to be a client of BPEL, and also from the BPEL server itself. If this is not the case then the client must override these values before invoking the service. Again, this may be the address of a front-end web server or load balancer and not the address of the BPEL server.

Startup Validation

During startup the BPEL engine will try to access {soapServerUrl}/orabpel/xmllib/ws-addressing.xsd to validate that it can access its own address. Version 10.1.3.5 will still start if it is unable to access this XSD, but it will log an error about being unable to access this URL.
This error should be taken seriously as it indicates a problem with your infrastructure.

Setting Endpoint and Imports in WSDL on Deployment

When a BPEL process is deployed, the abstract WSDL used to describe the interface of the process is supplemented with binding information to turn it into a concrete WSDL. In particular the location attribute of the address element is set to {soapServerUrl}/orabpel/{domain}/{process}/{revision}. This is shown below:

```
<service name="EchoProcess">
  <port name="EchoProcessPort" binding="tns:EchoProcessBinding">
    <soap:address location="http://soa.vm.oracle.com:7777/orabpel/default/EchoProcess/1.0"/>
  </port>
</service>
```

Processes deployed before a change to the soapServerUrl property will keep the old location value in their WSDL. This means that when a client retrieves the WSDL of the process, even if it retrieves the WSDL from the new soapServerUrl, it will still be told to call the service at the old soapServerUrl location.

Processes that support asynchronous interactions also have additional message types added to their WSDL to support WS-Addressing. Part of these additions is the inclusion of an import statement for the ws-addressing.xsd file. This file is referenced as an absolute URL so that it is not dependent on the location used to retrieve the WSDL:

```
<schema xmlns="http://www.w3.org/2001/XMLSchema">
  <import namespace="http://schemas.xmlsoap.org/ws/2003/03/addressing"
      schemaLocation="http://soa.vm.oracle.com:7777/orabpel/xmllib/ws-addressing.xsd"/>
</schema>
```

This import URL is of the form {soapServerUrl}/orabpel/xmllib/ws-addressing.xsd, which is the same URL used to validate access to the BPEL server at startup.

Accessing WSDL from Console

When you try to invoke a BPEL process directly from the console using the Initiate tab, the BPEL console constructs the URL used to retrieve the WSDL of the process from the soapServerUrl. If this is incorrect then the Initiate tab will throw an error similar to the following:

WSDLException: faultCode=PARSER_ERROR: Failed to read wsdl file at: "{soapServerUrl}/orabpel/default/VacationRequest/1.0/VacationRequest?wsdl", caused by: java.net.UnknownHostException. : java.net.UnknownHostException: {soapServerUrl hostname}: {soapServerUrl hostname}

This will also cause an error when you try to access the WSDL from the WSDL tab of the process. If this problem occurs then you need to correct your system so that the soapServerUrl is accessible from the BPEL server machine.

SOAP Optimisation Processing

When invoking a SOAP service the BPEL engine will check whether the location of the target service starts with {soapServerUrl}/orabpel. If it does then it will use an optimised transport, bypassing the HTTP stack and making the call via a Java API instead. This behaviour can be disabled by setting the optSoapShortcut property on the partner link to false.

Changing the soapServerUrl

The soapServerUrl is set by default to the hostname of the machine on which BPEL was installed. It can be changed by going into the BPEL Admin console (http://hostname:port/BPELAdmin). Any changes only take effect after a reboot of the BPEL server. The wrinkle is that when a process is deployed it stores its WSDL using the current soapServerUrl, but the BPEL server serves up the location using the soapServerUrl in effect at the time the engine was started, until the BPEL server is rebooted. The net effect is that if I change my soapServerUrl and deploy a process, I see that it is using the old soapServerUrl; then I reboot the server and find that the same process is now using the new soapServerUrl. But BPEL processes deployed when the old soapServerUrl was in effect keep the old soapServerUrl as their locations.

Things to Know

Some important things to know:

soapCallbackUrl is separate from soapServerUrl to allow different communication channels for callbacks to processes that have already started than for requests that initiate new processes. Most customers should set soapServerUrl and soapCallbackUrl to the same value.
soapCallbackUrl must be resolvable and routable by services that will call back into the BPEL server; note that these may include BPEL processes or human workflow.
soapServerUrl must be resolvable and routable by the BPEL server itself and by clients of the BPEL server.
Changes to soapServerUrl do not affect existing process endpoints.

I hope this was useful. I have fielded a number of calls about this recently and so felt that some explanation was necessary.
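The two URL rules described above, the ReplyTo address built from the soapCallbackUrl and the {soapServerUrl}/orabpel prefix test used for SOAP optimisation, can be sketched in shell. The URL values are taken from the sample messages above; the function names are illustrative helpers, not part of any BPEL API.

```shell
#!/bin/sh
# Values from the sample messages above.
SOAP_CALLBACK_URL="http://soa.vm.oracle.com:7777"
SOAP_SERVER_URL="http://soa.vm.oracle.com:7777"

# {soapCallbackUrl}/orabpel/{domain}/{process}/{revision}/{partnerLink}/{role}
callback_address() {
  echo "$SOAP_CALLBACK_URL/orabpel/$1/$2/$3/$4/$5"
}

# The engine uses the optimised (non-HTTP) transport when the target
# location starts with {soapServerUrl}/orabpel.
uses_optimised_transport() {
  case "$1" in
    "$SOAP_SERVER_URL"/orabpel*) return 0 ;;
    *) return 1 ;;
  esac
}

callback_address default AsyncEchoClientProcess 1.0 \
  AsyncEchoProcess AsyncEchoProcessRequester
```

Running this prints the same ReplyTo Address that appears in the sample WS-Addressing header above, which makes the construction rule easy to see at a glance.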


SOA Suite 11g R1 Developers Guide Available

Matt & I have just had the 11g version of our SOA Suite Developers Guide published by Packt Publishing. More than 40% of the book is new content, including guidance on how to use the new rules editor and the Event Delivery Network. When we started writing together our original target was the 11g product, but along came the BEA acquisition and 11g was delayed, so we re-focused the book on 10g. Of course, a few months after the 10g book was published 11g came out, so we dusted off the earliest chapters, updated them and started again. So in some ways this book is what we meant to write when we started. While writing the last book Matt emigrated to Australia. So, to keep up with him while writing this book, I moved to Colorado. Matt, not to be outdone, responded by leaving Oracle and starting his own company. Our families are now very worried about what we might do if we start writing another book together. The editorial team at Packt were very good at chasing us to finish the book; without them we would probably only be half-way through. We had some great reviewers who provided very insightful comments and often pointed us in the right direction when we had lost our way. You may well be familiar with some of the reviewers from their blogs or participation at industry events: John Deeb, Hans Forbrich, Bill Hicks, Marc Kelderman, Manoj Neelapu, ShuXuan Nie and Hajo Normann. In addition to the reviewers we had a lot of support from product management at Oracle; I found Clemens Utschig particularly helpful. I hope you enjoy the book. We have tried to make it into a guide for practitioners of SOA, and endeavored to explain how and why to use the different parts of the SOA Suite. Let us know what you think.


Building a SOA/BPM/BAM Cluster Part I – Preparing the Environment

An increasing number of customers are using SOA Suite in a cluster configuration; I might hazard to say that the majority of production deployments now use SOA clusters. So I thought it might be useful to detail the steps in building an 11g cluster and explain a little about why things are done the way they are. In this series of posts I will explain how to build a SOA/BPM cluster using the Enterprise Deployment Guide. This post explains the settings required to prepare the cluster for installation and configuration.

Software Required

The following software is required for an 11.1.1.3 SOA/BPM install:

Software | Version | Notes
Oracle Database | Certified databases are listed here | SOA & BPM Suites require a working database installation.
Repository Creation Utility (RCU) | 11.1.1.3 | If upgrading an 11.1.1.2 repository then a separate script is available.
Web Tier Utilities | 11.1.1.2 | Provides the web server. 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
Web Tier Utilities | 11.1.1.3 | Web server, 11.1.1.3 patch. You can use the 11.1.1.2 version without problems.
Oracle WebLogic Server 11gR1 | 10.3.3 | This is the host platform for the 11.1.1.3 SOA/BPM Suites.
SOA Suite | 11.1.1.2 | SOA Suite 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
SOA Suite | 11.1.1.3 | SOA Suite 11.1.1.3 patch; requires 11.1.1.2 to have been installed.

My installation was performed on Oracle Enterprise Linux 5.4 64-bit.

Database

I will not cover setting up the database in this series other than to identify the database requirements. If setting up a SOA cluster then ideally we would also be using a RAC database. I assume that this is running on separate machines to the SOA cluster. Section 2.1, "Database", of the EDG covers the database configuration in detail.

Settings

The database should have processes set to at least 400 if running SOA/BPM and BAM:

```
alter system set processes=400 scope=spfile
```

Run RCU

The Repository Creation Utility creates the necessary database tables for the SOA Suite. The RCU can be run from any machine that can access the target database. In 11g the RCU creates a number of pre-defined users and schemas with a user-defined prefix. This allows you to have multiple 11g installations in the same database.

After running the RCU you need to grant some additional privileges to the soainfra user, which should have privileges on the transaction tables:

```
grant select on sys.dba_pending_transactions to prefix_soainfra
Grant force any transaction to prefix_soainfra
```

Machines

The cluster will be built on the following machines. EDG Name is the name used for the machine in the EDG; Notes are a description of the purpose of the machine.

EDG Name | Notes
LB | External load balancer to distribute load across, and fail over between, web servers.
WEBHOST1 | Hosts a web server.
WEBHOST2 | Hosts a web server.
SOAHOST1 | Hosts SOA components.
SOAHOST2 | Hosts SOA components.
BAMHOST1 | Hosts BAM components.
BAMHOST2 | Hosts BAM components.

Note that it is possible to collapse the BAM servers so that they run on the same machines as the SOA servers. In this case BAMHOST1 and SOAHOST1 would be the same, as would BAMHOST2 and SOAHOST2. The cluster may include more than two servers, in which case we add SOAHOST3, SOAHOST4 etc. as needed. My cluster has WEBHOST1, SOAHOST1 and BAMHOST1 all running on a single machine.

Software Components

The cluster will use the following software components. EDG Name is the name used in the EDG; Type is the type of component, generally a WebLogic component; Notes are a description of the purpose of the component.

EDG Name | Type | Notes
AdminServer | Admin Server | Domain Admin Server
WLS_WSM1 | Managed Server | Web Services Manager Policy Manager Server
WLS_WSM2 | Managed Server | Web Services Manager Policy Manager Server
WLS_SOA1 | Managed Server | SOA/BPM Managed Server
WLS_SOA2 | Managed Server | SOA/BPM Managed Server
WLS_BAM1 | Managed Server | BAM Managed Server running Active Data Cache
WLS_BAM2 | Managed Server | BAM Managed Server without Active Data Cache
Node Manager | | Will run on all hosts with WLS servers
OHS1 | Web Server | Oracle HTTP Server
OHS2 | Web Server | Oracle HTTP Server
LB | Load Balancer | Load balancer, not part of SOA Suite

The above assumes a 2-node cluster.

Network Configuration

The SOA cluster requires an extensive amount of network configuration. I would recommend assigning a private sub-net (internal IP addresses such as 10.x.x.x, 192.168.x.x or 172.16.x.x) to the cluster for use by addresses that only need to be accessible to the load balancer or other cluster members. Section 2.2, "Network", of the EDG covers the network configuration in detail.

EDG Name is the hostname used in the EDG. IP Name is the IP address name used in the EDG. Type is the type of IP address:

Fixed – fixed to a single machine.
Floating – assigned to one of several machines, to allow for server migration.
Virtual – assigned to a load balancer and used to distribute load across several machines.

Host is the host where this IP address is active; for floating IP addresses a range of hosts is given. Bound By identifies which software component will use this IP address. Scope shows where this IP address needs to be resolved: Cluster scope addresses only have to be resolvable by machines in the cluster, i.e. the machines listed in the previous section, and are only used for inter-cluster communication or for access by the load balancer; Internal scope addresses must be resolvable by machines on the internal network. Notes are comments on why that type of IP is used.

EDG Name | IP Name | Type | Host | Bound By | Scope | Notes
ADMINVHN | VIP1 | Floating | SOAHOST1-SOAHOSTn | AdminServer | Cluster | Admin server, must be able to migrate between SOA server machines.
SOAHOST1 | IP1 | Fixed | SOAHOST1 | NodeManager, WLS_WSM1 | Cluster | WSM Server 1 does not require server migration.
SOAHOST2 | IP2 | Fixed | SOAHOST2 | NodeManager, WLS_WSM2 | Cluster | WSM Server 2 does not require server migration.
SOAHOST1VHN | VIP2 | Floating | SOAHOST1-SOAHOSTn | WLS_SOA1 | Cluster | SOA server 1, must be able to migrate between SOA server machines.
SOAHOST2VHN | VIP3 | Floating | SOAHOST1-SOAHOSTn | WLS_SOA2 | Cluster | SOA server 2, must be able to migrate between SOA server machines.
BAMHOST1 | IP4 | Fixed | BAMHOST1 | NodeManager | Cluster |
BAMHOST1VHN | VIP4 | Floating | BAMHOST1-BAMHOSTn | WLS_BAM1 | Cluster | BAM server 1, must be able to migrate between BAM server machines.
BAMHOST2 | IP3 | Fixed | BAMHOST2 | NodeManager, WLS_BAM2 | Cluster | BAM server 2 does not require server migration.
WEBHOST1 | IP5 | Fixed | WEBHOST1 | OHS1 | Cluster |
WEBHOST2 | IP6 | Fixed | WEBHOST2 | OHS2 | Cluster |
soa.mycompany.com | VIP5 | Virtual | LB | LB | Public | External access point to SOA cluster.
admin.mycompany.com | VIP6 | Virtual | LB | LB | Internal | Internal access to WLS console and EM.
soainternal.mycompany.com | VIP7 | Virtual | LB | LB | Internal | Internal access point to SOA cluster.

Floating IP addresses are IP addresses that may be re-assigned between machines in the cluster. For example, in the event of failure of SOAHOST1, WLS_SOA1 will need to be migrated to another server; in this case VIP2 (SOAHOST1VHN) will need to be activated on the new target machine. Once set up, the node manager will manage registration and removal of the floating IP addresses, with the exception of the AdminServer floating IP address. Note that if the BAMHOSTs and SOAHOSTs are the same machines then you can obviously share the hostnames and fixed IP addresses, but you still need separate floating IP addresses for the different managed servers.
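Since the cluster-scope names above only need hosts-file entries (see "Notes on DNS" below they never reach DNS), it can help to generate those entries once and distribute them. A sketch follows; the 10.0.1.x addresses are placeholders for a private sub-net, not values from the EDG.

```shell
#!/bin/sh
# Emit /etc/hosts entries for the cluster-scope names in the table
# above.  Addresses are placeholders; substitute your own sub-net.
cluster_hosts() {
  cat <<'EOF'
10.0.1.10   ADMINVHN
10.0.1.11   SOAHOST1
10.0.1.12   SOAHOST2
10.0.1.13   SOAHOST1VHN
10.0.1.14   SOAHOST2VHN
10.0.1.15   BAMHOST1
10.0.1.16   BAMHOST1VHN
10.0.1.17   BAMHOST2
10.0.1.18   WEBHOST1
10.0.1.19   WEBHOST2
EOF
}
cluster_hosts
```

The output would be appended to the hosts file on every cluster machine and on the load balancer, keeping the floating VHN names resolvable everywhere a server might migrate to.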
The hostnames don’t have to be the ones given in the EDG, but they must be distinct in the same way as the ETC names are distinct.  If the type is a fixed IP then if the addresses are the same you can use the same hostname, for example if you collapse the soahost1, bamhost1 and webhost1 onto a single machine then you could refer to them all as HOST1 and give them the same IP address, however SOAHOST1VHN can never be the same as BAMHOST1VHN because these are floating IP addresses. Notes on DNS IP addresses that are of scope “Cluster” just need to be in the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows) of all the machines in the cluster and the load balancer.  IP addresses that are of scope “Internal” need to be available on the internal DNS servers, whilst IP addresses of scope “Public” need to be available on external and internal DNS servers. Shared File System At a minimum the cluster needs shared storage for the domain configuration, XA transaction logs and JMS file stores.  It is also possible to place the software itself on a shared server.  I strongly recommend that all machines have the same file structure for their SOA installation otherwise you will experience pain!  Section 2.3, "Shared Storage and Recommended Directory Structure", of the EDG covers the shared storage recommendations in detail. The following shorthand is used for locations: ORACLE_BASE is the root of the file system used for software and configuration files. MW_HOME is the location used by the installed SOA/BPM Suite installation.  This is also used by the web server installation.  In my installation it is set to <ORACLE_BASE>/SOA11gPS2. ORACLE_HOME is the location of the Oracle SOA components or the Oracle Web components.  This directory is installed under the the MW_HOME but the name is decided by the user at installation, default values are Oracle_SOA1 and Oracle_Web1.  
In my installation they are set to <MW_HOME>/Oracle_SOA and <MW_HOME>/Oracle_WEB.

ORACLE_COMMON_HOME is the location of the common components and is located under the MW_HOME directory.  This is always <MW_HOME>/oracle_common.

ORACLE_INSTANCE is used by the Oracle HTTP Server and/or Oracle Web Cache.  It is recommended to create it under <ORACLE_BASE>/admin.  In my installation they are set to <ORACLE_BASE>/admin/Web1, <ORACLE_BASE>/admin/Web2 and <ORACLE_BASE>/admin/WC1.

WL_HOME is the WebLogic server home and is always found at <MW_HOME>/wlserver_10.3.

Key file locations are shown below.

<ORACLE_BASE>/admin/domain_name/aserver/domain_name
  Shared location for the domain, used to allow the admin server to manually fail over between machines.  When creating the domain, provide the aserver directory as the location for the domain.  In my install this is <ORACLE_BASE>/admin/aserver/soa_domain as I only have one domain on the box.

<ORACLE_BASE>/admin/domain_name/aserver/applications
  Shared location for deployed applications.  Needs to be provided when creating the domain.  In my install this is <ORACLE_BASE>/admin/aserver/applications.

<ORACLE_BASE>/admin/domain_name/mserver/domain_name
  Either a unique location for each machine, or shared between machines to simplify the task of packing and unpacking the domain.  This acts as the managed server configuration location.  Keeping it separate from the Admin Server helps to avoid the managed servers messing up the Admin Server.  In my install this is <ORACLE_BASE>/admin/mserver/soa_domain.

<ORACLE_BASE>/admin/domain_name/mserver/applications
  Either a unique location for each machine or shared between machines.  Holds deployed applications.  In my install this is <ORACLE_BASE>/admin/mserver/applications.
<ORACLE_BASE>/admin/domain_name/soa_cluster_name
  Shared directory that holds:
    dd – deployment descriptors
    jms – shared JMS file stores
    fadapter – shared file adapter co-ordination files
    tlogs – shared transaction log files
  In my install this is <ORACLE_BASE>/admin/soa_cluster.

<ORACLE_BASE>/admin/instance_name
  Local folder for a web server (OHS) instance.  In my install this is <ORACLE_BASE>/admin/web1 and <ORACLE_BASE>/admin/web2.  I also have <ORACLE_BASE>/admin/wc1 for the Web Cache I use as a load balancer.

<ORACLE_BASE>/product/fmw
  This can be a shared or local folder for the SOA/BPM Suite software.  I used a shared location so I only ran the installer once.  In my install this is <ORACLE_BASE>/SOA11gPS2.

All the shared files need to be put onto shared storage.  I am using NFS, but the recommendation for production would be a SAN, with mirrored disks for resilience.

Collapsing Environments

To reduce the hardware requirements it is possible to collapse the BAMHOST, SOAHOST and WEBHOST machines onto a single physical machine.  This will require more memory, but memory is a lot cheaper than additional machines.  For environments that require higher security, stay with a separate WEBHOST tier as per the EDG.  Similarly, for high volume environments keep a separate set of machines for the BAM and/or Web tier as per the EDG.

Notes on Dev Environments

In a dev environment it is acceptable to use a single node (non-RAC) database, but be aware that the configuration of the data sources is different (no need to use multi data sources in WLS).  Typically in a dev environment we will collapse the BAMHOST, SOAHOST and WEBHOST onto a single machine and use a software load balancer.  To test a cluster properly we will need at least 2 machines.

For my test environment I used Oracle Web Cache as a load balancer.  I ran it on one of the SOA Suite machines and it load balanced across the Web Servers on both machines.  This was easy for me to set up and I could administer it from a web-based console.
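The directory layout described above can be sketched as a script.  This is a minimal sketch using my single-domain names (aserver/mserver collapsed as in my install), created under a throwaway base directory so it is safe to try anywhere; point it at your real ORACLE_BASE on shared storage in practice.

```shell
#!/bin/sh
# Sketch: create the EDG-style directory layout under a demo ORACLE_BASE.
ORACLE_BASE=/tmp/oracle_base_demo   # demo location; use your shared storage path
DOMAIN_NAME=soa_domain

# Admin server domain and applications (shared storage in a real cluster)
mkdir -p "$ORACLE_BASE/admin/aserver/$DOMAIN_NAME"
mkdir -p "$ORACLE_BASE/admin/aserver/applications"

# Managed server domain and applications (per machine, or shared)
mkdir -p "$ORACLE_BASE/admin/mserver/$DOMAIN_NAME"
mkdir -p "$ORACLE_BASE/admin/mserver/applications"

# Shared cluster files: deployment descriptors, JMS stores,
# file adapter co-ordination files and transaction logs
for d in dd jms fadapter tlogs; do
  mkdir -p "$ORACLE_BASE/admin/soa_cluster/$d"
done

# Local OHS instances plus the Web Cache used as a load balancer
mkdir -p "$ORACLE_BASE/admin/web1" "$ORACLE_BASE/admin/web2" "$ORACLE_BASE/admin/wc1"
```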


SOA Suite 11g Releases

A few years ago Mars renamed one of the most popular chocolate bars in England from Marathon to Snickers.  Even today there are still some people confused by the name change who refer to them as Marathons.  Well, last week we released SOA Suite 11.1.1.3 and BPM Suite 11.1.1.3 as well as OSB 11.1.1.3.  It seems that some people are a little confused by the naming and how to install these new versions, probably the same Brits who call a Snickers a Marathon :-).  It seems that calling all the revisions 11g Release 1 has caused confusion.  To help these people I have created a little diagram to show how you can get the latest version onto your machine.  The dotted lines indicate dependencies.  Note that SOA Suite 11.1.1.3 and BPM 11.1.1.3 are provided as a patch that is applied to SOA Suite 11.1.1.2.  For a new install there is no need to run the 11.1.1.2 RCU; you can run the 11.1.1.3 RCU directly.

All SOA & BPM Suite 11g installations are built on a WebLogic Server base.  The WebLogic 11g Release 1 version is 10.3 with an additional number indicating the revision.  Similarly, the 11g Release 1 SOA Suite, Service Bus and BPM Suite have a version 11.1.1 with an additional number indicating the revision.  The final revision number should match the final revision in the WebLogic Server version.  The products are also sometimes identified by a Patch Set number, indicating whether this is the 11gR1 product with the first or second patch set.  The table below shows the different revisions with their aliases.

Product         | Version  | Base WebLogic | Alias
SOA Suite 11gR1 | 11.1.1.1 | 10.3.1        | Release 1 or R1
SOA Suite 11gR1 | 11.1.1.2 | 10.3.2        | Patch Set 1 or PS1
SOA Suite 11gR1 | 11.1.1.3 | 10.3.3        | Patch Set 2 or PS2
BPM Suite 11gR1 | 11.1.1.3 | 10.3.3        | Release 1 or R1
OSB 11gR1       | 11.1.1.3 | 10.3.3        | Release 1 or R1

Hope this helps some people.  If you find it useful you could always send me a Marathon bar, sorry, Snickers!


SOA Suite

Cold Start

Well, we had snow drifts 3ft deep on Saturday so it must be spring time.  In preparation for spring we decided to move the lawn tractor.  Of course, after sitting in the garage all winter it refused to start.  I then came into the office and needed to start my 11g SOA Suite installation.  I thought about this and decided my tractor might be cranky, but at least I can script the startup of my SOA Suite 11g installation.  So with this in mind I created six scripts.  I created them for Linux but they should translate to Windows without too many problems; this is left as an exercise for the reader.  Note that you will have to hardcode more than I did in the Linux scripts and create separate script files for the sqlplus and WLST sections.

Order to Start Things

I believe there should be order in all things, especially starting the SOA Suite.  So here is my preferred order:

Start Database.  This is needed by EM and the rest of the SOA Suite, so best to start it before the Admin Server and managed servers.

Start Node Manager on all machines.  This is needed if you want the scripts to work across machines.

Start Admin Server.  Once this is done you can, in theory, manually start the managed servers using the WebLogic console, but then you have to wait for the console to be available.  Scripting it all is a quicker and easier way of starting.

Start Managed Servers & Clusters.  Best to start them one per physical machine at a time to avoid undue load on the machines.  A non-clustered install will have just soa_server1 and bam_server1 by default.  Clusters will have at least SOA and BAM clusters that can be started as a group or individually.  I have provided scripts for standalone servers, but it is easy to change them to work with clusters.

Starting the Database

I have provided a very primitive script (available here) to start the database, the listener and the DB console.  The section highlighted in red needs to match your database name.
#!/bin/sh
echo "##############################"
echo "# Setting Oracle Environment #"
echo "##############################"
. oraenv <<-EOF
orcl
EOF
echo "#####################"
echo "# Starting Database #"
echo "#####################"
sqlplus / as sysdba <<-EOF
startup
exit
EOF
echo "#####################"
echo "# Starting Listener #"
echo "#####################"
lsnrctl start
echo "######################"
echo "# Starting dbConsole #"
echo "######################"
emctl start dbconsole
read -p "Hit <enter> to continue"

Starting the SOA Suite

My script for starting the SOA Suite (available here) breaks the task down into five sections.

Setting the Environment

First set up the environment variables.  The variables highlighted in red probably need changing for your environment.

#!/bin/sh
echo "###########################"
echo "# Setting SOA Environment #"
echo "###########################"
export MW_HOME=~oracle/Middleware11gPS1
export WL_HOME=$MW_HOME/wlserver_10.3
export ORACLE_HOME=$MW_HOME/Oracle_SOA
export DOMAIN_NAME=soa_std_domain
export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME

Starting the Node Manager

I start the node manager with nohup to stop it exiting when the script terminates, and I redirect the standard output and standard error to a file in a logs directory.

cd $DOMAIN_HOME
echo "#########################"
echo "# Starting Node Manager #"
echo "#########################"
nohup $WL_HOME/server/bin/startNodeManager.sh >logs/NodeManager.out 2>&1 &

Starting the Admin Server

I had problems starting the Admin Server from Node Manager so I decided to start it using the command line script.  I again use nohup and redirect output.

echo "#########################"
echo "# Starting Admin Server #"
echo "#########################"
nohup ./startWebLogic.sh >logs/AdminServer.out 2>&1 &

Starting the Managed Servers

I then used WLST (WebLogic Scripting Tool) to start the managed servers.
First I waited for the Admin Server to come up by putting a connect command in a loop.  I could have put the WLST commands into a separate script file, but I wanted to reduce the number of files I was using and so used redirected input (here-document syntax).

$ORACLE_HOME/common/bin/wlst.sh <<-EOF
import time
sleep=time.sleep
print "#####################################"
print "# Waiting for Admin Server to Start #"
print "#####################################"
while True:
  try:
    connect(adminServerName="AdminServer")
    break
  except:
    sleep(10)

I then start the SOA server and tell WLST to wait until it is started before returning.  If starting a cluster then the start command would be modified accordingly to start the SOA cluster.

print "#######################"
print "# Starting SOA Server #"
print "#######################"
start(name="soa_server1", block="true")

I then start the BAM server in the same way as the SOA server.

print "#######################"
print "# Starting BAM Server #"
print "#######################"
start(name="bam_server1", block="true")
EOF

Finally I let people know the servers are up and wait for input, in case I am running in a separate window, in which case the result would be lost without the read command.

echo "#####################"
echo "# SOA Suite Started #"
echo "#####################"
read -p "Hit <enter> to continue"

Stopping the SOA Suite

My script for shutting down the SOA Suite (available here) is basically the reverse of my startup script.  After setting the environment I connect to the Admin Server using WLST and shut down the managed servers and the admin server.  Again, the script would need modifying for a cluster.

Stopping the Servers

If I cannot connect to the Admin Server I try to connect to the node manager, in case the Admin Server is down but the managed servers are up.
#!/bin/sh
echo "###########################"
echo "# Setting SOA Environment #"
echo "###########################"
export MW_HOME=~oracle/Middleware11gPS1
export WL_HOME=$MW_HOME/wlserver_10.3
export ORACLE_HOME=$MW_HOME/Oracle_SOA
export DOMAIN_NAME=soa_std_domain
export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME
cd $DOMAIN_HOME
$MW_HOME/Oracle_SOA/common/bin/wlst.sh <<-EOF
try:
  print("#############################")
  print("# Connecting to AdminServer #")
  print("#############################")
  connect(username='weblogic',password='welcome1',url='t3://localhost:7001')
except:
  print "#########################################"
  print "#   Unable to connect to Admin Server   #"
  print "# Attempting to connect to Node Manager #"
  print "#########################################"
  nmConnect(domainName=os.getenv("DOMAIN_NAME"))
print "#######################"
print "# Stopping BAM Server #"
print "#######################"
shutdown('bam_server1')
print "#######################"
print "# Stopping SOA Server #"
print "#######################"
shutdown('soa_server1')
print "#########################"
print "# Stopping Admin Server #"
print "#########################"
shutdown('AdminServer')
disconnect()
nmDisconnect()
EOF

Stopping the Node Manager

I stopped the node manager by searching for the java node manager process using the ps command and then killing that process.

echo "#########################"
echo "# Stopping Node Manager #"
echo "#########################"
kill -9 `ps -ef | grep java | grep NodeManager | awk '{print $2;}'`
echo "#####################"
echo "# SOA Suite Stopped #"
echo "#####################"
read -p "Hit <enter> to continue"

Stopping the Database

Again, my script for shutting down the database is the reverse of my start script.  It is available here.  The only change needed might be to the database name.
#!/bin/sh
echo "##############################"
echo "# Setting Oracle Environment #"
echo "##############################"
. oraenv <<-EOF
orcl
EOF
echo "######################"
echo "# Stopping dbConsole #"
echo "######################"
emctl stop dbconsole
echo "#####################"
echo "# Stopping Listener #"
echo "#####################"
lsnrctl stop
echo "#####################"
echo "# Stopping Database #"
echo "#####################"
sqlplus / as sysdba <<-EOF
shutdown immediate
exit
EOF
read -p "Hit <enter> to continue"

Cleaning Up

Cleaning the SOA Suite

I often run tests and want to clean up all the log files.  The following script (available here) does this for the WebLogic servers in a given domain on a machine.  After setting the domain I just remove all files under the servers' logs directories.  It also cleans up the log files I created with my startup scripts.  These scripts could be enhanced to copy off the log files if you needed them, but in my test environments I don't need them and would prefer to reclaim the disk space.

#!/bin/sh
echo "###########################"
echo "# Setting SOA Environment #"
echo "###########################"
export MW_HOME=~oracle/Middleware11gPS1
export WL_HOME=$MW_HOME/wlserver_10.3
export ORACLE_HOME=$MW_HOME/Oracle_SOA
export DOMAIN_NAME=soa_std_domain
export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME
echo "##########################"
echo "# Cleaning SOA Log Files #"
echo "##########################"
cd $DOMAIN_HOME
rm -Rf logs/* servers/*/logs/*
read -p "Hit <enter> to continue"

Cleaning the Database

I also created a script to clean up the dump files of an Oracle database instance and also the EM log files (available here).  This relies on the machine name being correct, as the EM log files are stored in a directory that is based on the hostname and the Oracle SID.

#!/bin/sh
echo "##############################"
echo "# Setting Oracle Environment #"
echo "##############################"
. oraenv <<-EOF
orcl
EOF
echo "#############################"
echo "# Cleaning Oracle Log Files #"
echo "#############################"
rm -Rf $ORACLE_BASE/admin/$ORACLE_SID/*dump/*
rm -Rf $ORACLE_HOME/`hostname`_$ORACLE_SID/sysman/log/*
read -p "Hit <enter> to continue"

Summary

Hope you find the above scripts useful.  They certainly stop me hanging around waiting for things to happen on my test machine, and make it easy to run a test, change parameters, bounce the SOA Suite and clean the logs between runs so I can see exactly what is happening.  Now I need to get that mower started…
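As a side note, the kill -9 on a ps | grep pipeline used above to stop the node manager can pick up the wrong process if anything else has "NodeManager" on its command line.  A slightly safer sketch uses pkill's full-command-line match; the pattern here is an assumption about how the node manager's Java main class appears in the process list, so adjust it to your installation.

```shell
#!/bin/sh
# Sketch: stop the node manager by matching its full command line with pkill.
# 'weblogic.NodeManager' is an assumed match for the Java main class in ps output.
if pkill -f 'weblogic.NodeManager'; then
  echo "Node Manager stopped"
else
  echo "Node Manager was not running"
fi
```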


Lost in Translation

Using the Correct Character Set for the SOA Suite Database

A couple of years ago I spent a wonderful week in Tel Aviv helping with the first Oracle BAM implementation in Israel.  Although everyone I interacted with spoke better English than I did, the screens and data for the implementation were all in Hebrew, meaning the Hebrew alphabet.  Over the week I learnt to recognize a few Hebrew words, enough to enable me to test what we were doing.  So I knew the SOA Suite worked fine with non-English and non-Latin character sets, and I was suspicious recently when a customer was having data corruption of non-Latin characters.  On investigation it turned out that the data was received correctly by the SOA Suite, but was then corrupted after being stored in the database.

A little investigation revealed that the customer was using the default database character set, WE8ISO8859P1, which, as the name suggests, only supports West European 8-bit characters.  What had happened was that when the customer installed his SOA repository he ignored the message that his database was not using AL32UTF8 as its character set.  After changing the character set on his database he no longer saw the corruption of non-English character data.

So the moral of this story is: always install the SOA Repository into an AL32UTF8 database.  This is true for both SOA Suite 10g and 11g.  Ignore it at your peril, because you never know when you will need to support Hebrew, or Japanese, or another multi-byte character set.
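The corruption mechanism is easy to demonstrate outside the database: a West European 8-bit character set simply has no representation for Hebrew.  The quick iconv experiment below illustrates the principle (it is an analogy using ISO-8859-1, the character set behind WE8ISO8859P1, not the Oracle conversion code itself).

```shell
#!/bin/sh
# Hebrew text survives a UTF-8 round trip, but has no representation
# in ISO-8859-1, so an attempted conversion fails.
TEXT='שלום'   # "shalom"
printf '%s' "$TEXT" | iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1 \
  && echo "UTF-8: representable"
printf '%s' "$TEXT" | iconv -f UTF-8 -t ISO-8859-1 >/dev/null 2>&1 \
  || echo "ISO-8859-1: not representable"
```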


Waiting to be Served, And Waiting, And Waiting …

Have you ever sat in a waiting room, say at the doctor's, or worse the Dept of Motor Vehicles, and noticed that everyone else is being called up but you?  Turns out that they somehow “lost” you from their queue.  I have been working with a customer recently who has been seeing a large number of one-way invocation messages appearing in the recovery list in BPEL 10.1.3.4.  These messages just sit there waiting to be delivered, but never actually get delivered.  The message just sits there, wondering why it is being ignored.  To understand what is happening here, let's look at how messages are delivered for one-way invocations.

Message Delivery for One-Way Invocations

In an earlier post I spoke about the threading model in 10.1.3.4.  In it I explained how one-way invocations that will create new processes are stored in the database and a notification message is placed on an in-memory queue.  The actual message is placed in the INVOKE_MESSAGE table in the orabpel schema.  The notification message placed on the in-memory queue holds a key that can be used to retrieve the message from the INVOKE_MESSAGE table.  The dsp_invoke threads wait for messages to appear on this queue, and as soon as a message is available one of them will remove it from the queue and start executing the process associated with that message.

What Could Possibly Go Wrong?

So it all seems pretty straightforward.  The message is stored in the database and a notification placed on a queue.  A thread reads the queue, retrieves the message and executes it.  All is well and good except for a couple of things.

Server Shutdown

If the BPEL server is shut down, either as a result of a planned outage or due to some system failure, then any notification messages left in the queue will be lost, because it is an in-memory queue and not persisted.  When the server starts again the messages sit in the INVOKE_MESSAGE table, but there is no notification on the queue, so each message just sits there.
Instance Rollback

Another way for messages to back up in the INVOKE_MESSAGE table is if a rollback occurs when they are processed.  If the transaction rolls back to the initial receive then the message stays in the same state (0, meaning unhandled) in the table, because the transaction marking it as removed from the table never commits.  In this case you would normally hope to see a faulted instance.  If the fault is transitory then it may be possible to re-execute the process, but there is no notification message on the queue to ask for this to occur.

Recovering Manually

Just like when the secretary loses your name from the queue and you need to put it in front of them again, we need to raise a new notification message.  This can be done manually from the BPEL console using the Instance/Recovery tab as shown in the screenshot below.  Select the messages you want to recover and hit the recover button.  This will place a notification message on the queue, requesting that the message be processed.

Avoiding Manual Labor

You probably don't want to spend a lot of time recovering messages manually; surely there must be a way to automate this.  Sure enough there is.  In 10.1.3.4 Oracle introduced the auto-recovery feature, which is configured from the Configuration/Auto-Recovery tab of the BPEL console.  This allows you to resubmit notification messages automatically.  It consists of two parts: the startup schedule and the recurring schedule.

Dealing with Message Loss Due to Server Downtime

The startup schedule deals with our first notification message loss scenario, when notifications are lost due to server shutdown.  In this case, at startup the INVOKE_MESSAGE table is scanned for undelivered messages.  Undelivered messages have notification messages generated for them in batches of maxMessageRaiseSize, with a delay of subsequentTriggerDelay seconds between batches.
This continues until either all undelivered messages have a notification message placed on the in-memory queue, or the startupRecoveryDuration time has been exceeded.  To enable this activity, all that is required is to set startupRecoveryDuration greater than 0.

Dealing with Message Loss Due to Intermittent Rollbacks

The recurring schedule deals with our second notification message loss scenario, when notifications are lost due to a transient error causing a process to roll back.  In this case the INVOKE_MESSAGE table is scanned on a regular basis for undelivered messages.  Similar to startup recovery, this scheduled recovery will place up to maxMessageRaiseSize notification messages onto the queue every subsequentTriggerDelay seconds.  Unlike the startup scenario, this occurs every day between startWindowTime and stopWindowTime, even if the previous check showed no messages to recover.

Note that there is a problem with this in that a message may already have a notification in the queue when the recovery is run, causing the message to be processed twice.  This can be avoided by setting threshHoldTimeInMinutes to a suitable number of minutes; the default is 10.  This causes the recovery to ignore messages younger than threshHoldTimeInMinutes, giving the BPEL engine time to process the original notification.  We can turn on scheduled recovery by making sure that the stopWindowTime comes after the startWindowTime.

Versions

The auto-recovery feature first appeared in release 10.1.3.4.  The threshHoldTimeInMinutes property was added in 10.1.3.5 and also in 10.1.3.4 MLR#8.

Summary

I strongly recommend that you configure the startup schedule auto-recovery feature, as it will ensure that all messages get at least one chance to be delivered.  If you suffer from intermittent process rollbacks due to transient errors then you will also benefit from the recurring scheduled auto-recovery feature.
But be careful, because you might just keep resubmitting messages that can never be recovered, and over time your system will spend more of its time trying to reschedule unprocessable messages than doing real work.  So remember that recurring schedule auto-recovery is very powerful, but with great power comes great responsibility.  Something that secretaries in doctors' surgeries and at the Dept of Motor Vehicles know and exploit!
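Pulling the recovery settings from this post together, a configuration sketch might look like the following.  Only the property names come from the product; the values shown are illustrative assumptions, and the settings are made through the Configuration/Auto-Recovery tab of the BPEL console rather than by hand-editing a file.

```text
startupRecoveryDuration = 600    # seconds; > 0 enables the startup schedule
maxMessageRaiseSize     = 50     # notifications raised per batch
subsequentTriggerDelay  = 60     # seconds between batches
startWindowTime         = 00:00  # recurring schedule window start
stopWindowTime          = 23:59  # must be after startWindowTime to enable it
threshHoldTimeInMinutes = 10     # ignore messages younger than this (default 10)
```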


Collecting Detailed Logs from BPEL 10.1.3.4

In 10.1.3.4 Oracle added a significant new feature to help diagnose problems in the BPEL Process Manager.  This feature allows you to turn up logging for a test and run thread dumps every ten seconds.  When you are done you can download the logs and thread dumps as a single zip file, either to study yourself or to upload to Oracle support.  In this entry we will look at how to use this feature.

Starting Data Collection

If you log on to the BPEL console and go to the Administration tab you will see a Diagnostics sub-tab.  Select this tab and you will see a screen with two buttons allowing you to start and stop collection of detailed logs and thread dumps.  Pressing “Start Collection” will, as the screen says, set all the loggers to the debug level and also start performing a thread dump every 10 seconds.  Note that this will also reset the statistics collection.  Having started your data collection you have up to five minutes to run your tests before the diagnostics collection stops automatically, so be prepared to execute your tests immediately!

Stopping Data Collection

After running your tests, return to the Diagnostics tab under Administration and press the “Stop Collection” button.  When collection is stopped the loggers are reset to the values you had before pressing the “Start Collection” button.  It will also reset the statistics collection, again (it says that on the screen, but when I looked on base 10.1.3.4 it hadn't reset them).  Finally it will download a file called log.zip containing the data collected during your test run.  Save this file to your hard disk.  If you forget to stop the data collection then it will stop automatically after 5 minutes, but it won't give you an option to download the results.

What You Get for Your Money

So what do you get in the download?  The following file structure will appear in your zip file.
logs – the logs directory of your BPEL domain, $ORACLE_HOME/bpel/domains/<domain_name>/logs
  domain.log – the current domain log
  domain.log.N – previous domain logs
  dispatcher.xml – contains information about message processing in the BPEL engine, particularly useful for information about thread usage
  stats.xml – contains an XML representation of the statistics in the BPEL console's Administration/Statistics tab; it also includes some very basic JVM stats
  thread-dump-NN.txt – the thread dump files, one for every ten seconds that the data collection was running

The domain logs don't cover just the time you were running with enhanced logging, but include all the domain logs available up to the point of download of the zip.  To find the start of the enhanced logging section you can search for <CubeLogCollector::startCollection>, which marks the start of enhanced logging.

What Can I Do With All This Stuff?

All the data collected, apart from the thread dumps, can be more easily viewed through the BPEL console.  But if you need to upload information to Oracle support to help them diagnose issues then this is a great tool to use.  The thread dumps can also be useful if you have your own Java code executing and you want to see why and where it is blocking.

Summary

The Diagnostics tab is a useful tool for collecting information for upload to Oracle support to help diagnose problems with the core BPEL engine.
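Finding the enhanced-logging marker in the extracted zip is a one-line grep.  The sketch below creates a stand-in domain.log so the command can be tried without a real log.zip; with the real download you would unzip first and point grep at logs/domain.log.

```shell
#!/bin/sh
# Create a stand-in for the extracted log.zip contents.
mkdir -p logdump/logs
printf '%s\n%s\n' \
  'ordinary logging before collection started' \
  '<CubeLogCollector::startCollection> enhanced logging begins here' \
  > logdump/logs/domain.log

# Locate the start of the enhanced-logging section (with line number).
grep -n 'CubeLogCollector::startCollection' logdump/logs/domain.log
```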


Threading in 10.1.3.4

Seems a long time since I wrote anything; between getting sick with mono (glandular fever to fellow Brits), Christmas, New Year, and another large project I have been slow to write anything up.  It seems I have been working with a number of customers recently who have problems with, or are concerned about, BPEL threading.  So in this entry I will give an overview of how threads are allocated by the BPEL engine.

Overview

Threads in BPEL are divided into thread pools.  This is a feature of the JVM that is also used in most Java systems.  Typically a thread pool is dedicated to a particular task, such as processing invocation messages or running JMS adapters.  The first question to answer when talking about threading in BPEL is about the source of the activity in the BPEL process.  Different thread pools are used depending on the source of the activity.  Some of these pools are BPEL thread pools; others are application server pools used by BPEL.

Request/Reply Invocations

If the interface to your BPEL process is a synchronous request/reply interface then you will “steal” the thread from the source of the call.  A synchronous request/reply interface is characterized in WSDL by an operation with an <input> and an <output> message.  In your BPEL process this translates to a <receive> and a <reply> for the partner link.  Common sources of threads for request/reply activities include:

The application server servlet thread pool, for HTTP web service invocations.
The application server servlet thread pool, for invocations from JSP pages and servlets through the Java API.
The EJB thread pool, for remote invocations using the Java API.
Adapter thread pools, for invocations from adapters.

Activities that Use Additional Threads

Normally a request/reply interaction will execute entirely in a single thread.  However, if there are any activities that would cause the BPEL process to be suspended, or the client transaction to be suspended, then another thread will be used.
If the BPEL process must pause its execution to wait for a long-running activity then the original request/reply thread will block until the reply is reached.  This allows the user transaction to be preserved.  When the reply activity is reached the invoking thread is notified and it can return the result to the client.  Any activities that occur after the invoker thread is suspended will execute in an engine thread, as they are not part of the original invoke.

One-Way and Asynchronous Invocations

If the interface to your BPEL process is a one-way or an asynchronous request/reply interface then your BPEL process will receive its request through a message placed on a queue in the context of the caller's thread.  The same sources as for request/reply activities also provide the one-way interactions.  A one-way interaction is characterized in WSDL by an operation with only an <input> message.  In your BPEL process this translates to a <receive> without a <reply> for the partner link.  An asynchronous request/reply interface is characterized in WSDL by two port types, each with an operation with only an <input> message; the port types provide two different roles.  In your BPEL process this translates to a <receive> and an <invoke> for the partner link.

The actual message is stored in the invocation tables in the BPEL schema, and a short notification message referring to the message in the table is placed on an in-memory JMS queue.  The invoker threads from the invoker thread pool wait to receive notifications from this queue.  The number of threads a given domain has waiting to read these asynchronous messages is controlled by the dspInvokeThreads property.  If the invoker thread hits an activity that would cause the BPEL process to pause, it will stop working on that process instance and return to waiting on the queue for a new notification message.
Activities that Pause the BPEL Process

The following activities cause a BPEL process instance to be suspended and its state stored in the dehydration store; they also cause the current thread to stop working on the process instance:

<receive> – any receive other than the one that initiated the process will potentially cause the BPEL engine to wait for a message to arrive.
<wait> – any wait greater than a small value will be implemented through the Quartz timers and the BPEL process will wait for the alarm to trigger before continuing.
<pick> – waits for one of a number of potential messages to arrive or a timeout to expire.  The net result is again that the BPEL process will wait for either the first message to arrive or the timeout to occur.
<invoke> with nonBlockingInvoke – nonBlockingInvoke allows the thread to find other activities to perform without waiting for the invoke to complete.  This is commonly used in a flow statement to allow concurrent execution of synchronous request/reply messages.  Under the covers this creates an internal receive to wait for the result of the non-blocking invoke.  Of course a receive is one of our activities that causes thread execution to change.  The actual invoke will execute in a separate thread and when it completes it will notify the BPEL process through the internal receive.  Hence it is useful to think of the non-blocking invoke as an invoke followed by a receive.

Resuming a Paused BPEL Process

A paused BPEL process instance will be woken either by a message arriving or a timer expiring.  Messages for paused processes arrive in the same way as any other message, as explained in the request/reply section.  In this case the delivering thread places the message in the dispatch table rather than the invoke table and posts a notification to a queue.  A different thread pool from the invocation pool is used to consume these messages.
The engine threads listen on this queue and one of them will receive the message and resume executing the process.  The number of threads in the pool is controlled by the dspEngineThreads parameter.  The engine thread will continue to execute the process until it reaches another activity that causes the process to be suspended, at which point the process state will again be stored in the dehydration store and the thread will return to waiting on the queue.

Additional Thread Pools

In addition to the servlet thread pools, each adapter has its own thread pool which can be tuned in the adapter configuration files.  Typically the adapter thread pools sit blocked waiting for incoming messages, either waking up to poll for database changes or new files, or blocking on a queue.  There is another BPEL thread pool, the system thread pool, controlled by dspSystemThreads.  This thread pool is used to perform system tasks and generally doesn't have too much to do.

Domains, Clusters and Database Connections

The discussion above explains how different threads are used by the BPEL engine.  It is worth thinking for a moment about what this means for the environment in which the BPEL processes execute.

Domains

Each domain has its own invoke and engine thread pools.  So if you have multiple domains you could end up with a lot of threads.  More threads does not mean that the system will run faster.  Generally, if the threads in the pool are active it means that they are consuming CPU.  Threads executing a BPEL process instance will always consume CPU unless they are blocked in a request/reply interaction with another system.  This means that if you allow too many threads you may see a lot of context switching between threads, which can result in lower system throughput than could be achieved with a smaller number of threads.
Tuning the thread pools is best done empirically based on the system utilization expected in production.  The total number of threads used will be the sum of dspInvokeThreads, dspEngineThreads and dspSystemThreads across all the domains in the BPEL server.  All these threads exist in a single JVM, the container running BPEL Process Manager.

Clusters

In a BPEL cluster the same number of threads is used on each node of the cluster, as configured in the domain.xml file for each domain.  It is worth remembering that adapters that are active on all nodes in the cluster will be using resources on every machine, and this should be factored into the number of resources available in the target systems.

Database Connections

Each thread executing a BPEL process instance, whether a dspInvokeThread, a dspEngineThread, a servlet thread, or a thread from any other pool, will require at least one database connection to the dehydration store.  This requires that the database connection pool in the application server be at least as big as the sum of the thread pools across all the BPEL domains.  In addition it must allow for request/reply interactions from EJBs and servlets.  By default this connection pool is unlimited in OC4J.  Even if the database connection pool is unlimited on the application server, the database itself does not support an unlimited number of sessions.  The database sessions parameter must be sized at least big enough for all the threads on all the domains on all the nodes to have a connection to the dehydration store.  Even for a small cluster this can be hundreds or even thousands of potential sessions.

Summary

So the short summary is that request/reply messages execute on the thread on which the message arrived, while asynchronous messages are delivered using the thread on which they arrived but are processed by one of two thread pools.
New process instances start executing using threads from the dspInvokeThreads pool; existing processes are resumed using threads from the dspEngineThreads pool.  Of course there are exceptions to what we have spoken about, but although properties may change the behavior of threading, it all boils down to either being treated as a request/reply message using the arriving thread, or as an async message using either the invoke thread pool or the engine thread pool as appropriate.  Hopefully this clarifies how threads are used in BPEL 10.1.3.4.
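The connection and session sizing described above can be made concrete with a little arithmetic.  All the numbers below are hypothetical examples, not recommendations:

```java
// Rough sizing sketch: minimum dehydration-store sessions needed so that
// every BPEL thread in every domain on every node can hold a connection.
// All pool sizes and counts here are invented for illustration.
public class SessionSizing {
    public static void main(String[] args) {
        int dspInvokeThreads = 20;
        int dspEngineThreads = 30;
        int dspSystemThreads = 2;
        int domains = 3;  // domains per BPEL server
        int nodes = 2;    // nodes in the cluster

        int threadsPerDomain = dspInvokeThreads + dspEngineThreads + dspSystemThreads;
        int threadsPerNode = threadsPerDomain * domains;
        int minSessions = threadsPerNode * nodes;

        System.out.println("Threads per node: " + threadsPerNode);  // 156
        System.out.println("Minimum DB sessions: " + minSessions);  // 312
    }
}
```

Even these modest hypothetical pool sizes demand over 300 database sessions before allowing for EJB and servlet request/reply traffic, which is why the database sessions parameter so often needs raising.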


SOA Suite

Calling EJB 3 from BPEL 10.1.3

Despite a number of useful blog entries out there it seems that calling EJB 3 from BPEL is still stumping people, so I thought I would go through the steps.  Note that these are much easier than in earlier releases of EJB and BPEL.

Create an EJB 3 Session Bean

First I created a simple EJB 3 session bean that had two methods:

hello - which uses just simple String input and output.
swap - which uses a custom class as input and output.

I used JDeveloper to create a simple EJB 3 session bean and accepted all the defaults.  Here is the code for the bean class:

package testejb;

import javax.ejb.Stateless;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.jws.soap.SOAPBinding.Style;
import oracle.webservices.annotations.WSIFEJBBinding;

@Stateless(name="MySessionEJB")
public class MySessionEJBBean implements MySessionEJB, MySessionEJBLocal {
    public MySessionEJBBean() {
    }

    public String hello(String name) {
        return "Hello " + name;
    }

    public MyComplexType swap(MyComplexType input) {
        MyComplexType retval = new MyComplexType();
        retval.data1 = input.data2;
        retval.data2 = input.data1;
        return retval;
    }
}

The MyComplexType class is shown below:

package testejb;

import java.io.Serializable;

public class MyComplexType implements Serializable {
    public String data1;
    public String data2;
}

The swap and hello methods were marked as available in the local and remote interfaces.  I then created a deployment descriptor and deployed the EJB to an OC4J container.

Testing the EJB 3 Session Bean

I then used JDeveloper to generate a simple test client to ensure that the bean remote interface was working.
The code for that is shown below:

package testejb;

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class MySessionEJBClient {
    public static void main(String[] args) {
        try {
            final Context context = getInitialContext();
            MySessionEJB mySessionEJB = (MySessionEJB)context.lookup("MySessionEJB");
            // Call the hello method on the remote interface
            System.out.println(mySessionEJB.hello("Antony"));
            // Call the swap method on the remote interface
            MyComplexType req = new MyComplexType();
            MyComplexType resp;
            req.data1 = "d1";
            req.data2 = "d2";
            resp = mySessionEJB.swap(req);
            System.out.println("data1=" + resp.data1);
            System.out.println("data2=" + resp.data2);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    private static Context getInitialContext() throws NamingException {
        Hashtable env = new Hashtable();
        // Oracle Application Server 10g connection details
        env.put(Context.INITIAL_CONTEXT_FACTORY, "oracle.j2ee.rmi.RMIInitialContextFactory");
        env.put(Context.SECURITY_PRINCIPAL, "oc4jadmin");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        env.put(Context.PROVIDER_URL, "opmn:ormi://w2k3:6003:home/MySessionEJBDeploymentProfile");
        return new InitialContext(env);
    }
}

Once I had verified that I had a working EJB, the next step was to create a WSDL for the EJB.

Creating a WSDL for the EJB 3 Bean

The easiest way to create a WSDL is to get JDeveloper and OC4J to do most of the work.
To do this, modify the bean class to add web service annotations as shown below:

import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.jws.soap.SOAPBinding.Style;
import oracle.webservices.annotations.WSIFEJBBinding;
...
@Stateless(name="MySessionEJB")
@WebService
@WSIFEJBBinding
@SOAPBinding(style=Style.RPC)
public class MySessionEJBBean implements MySessionEJB, MySessionEJBLocal {

The @WebService annotation identifies that this should be a web service, @WSIFEJBBinding says that we want to support an EJB binding, and @SOAPBinding(style=Style.RPC) says that the style of web service should be RPC.  The RPC style is needed for the EJB binding to work properly.

Having updated the annotations we can then re-deploy the EJB and access the newly created WSDL through the Web Service Test facility of Enterprise Manager.  This WSDL can be downloaded and used as the basis of the WSDL for access to the EJB.

Modifying the WSDL for use in BPEL

Before we use the WSDL we need to modify it as follows.  First we remove the SOAP port by commenting it out as shown below:

<!--
    <port name="MySessionEJB" binding="tns:MySessionEJBBeanSoapHttp">
        <soap:address location="http://w2k3/MySessionEJBDeploymentProfile/MySessionEJB"/>
    </port>
-->

We then enrich the EJB port by adding additional information about the EJB.  The initial EJB port is shown below:

<port name="WsifEjb" binding="tns:WsifEjbBinding">
    <ejb:address initialContextFactory="com.evermind.server.rmi.RMIInitialContextFactory"/>
</port>

Unfortunately this does not say where the service is located, so we need to enrich this information by adding a jndiName attribute, which is the name of the EJB, and a jndiProviderURL attribute, which is the endpoint location of the naming context.  Both these values can be taken from the test client (the lookup() parameter and the Context.PROVIDER_URL property value).
This gives us the following service element:

<port name="WsifEjb" binding="tns:WsifEjbBinding">
    <ejb:address initialContextFactory="com.evermind.server.rmi.RMIInitialContextFactory"
                 jndiName="MySessionEJB"
                 jndiProviderURL="opmn:ormi://10.148.55.149:6003:home/MySessionEJBDeploymentProfile"/>
</port>

We can now import this WSDL into our BPEL process and use it like any other partner link.

Security Concern

When using the EJB WSDL we need to provide a username and password for the JNDI lookup.  This is done by adding the following properties to the partner link:

java.naming.security.principal
java.naming.security.credentials

and setting them to appropriate values such as oc4jadmin and the corresponding password.

Classes

In order for BPEL to find the Java classes they must be on the classpath.  Move the required class files generated by the EJB into the $ORACLE_HOME/bpel/system/classes directory and restart the OC4J container.

Gotchas

There are a number of things that can go wrong:

If you get a name-not-found exception, the port properties are not properly set up or the username and password are wrong.
If you get an unable-to-find-method error, it is probably because you failed to use the RPC style and instead used document-style web services.
If you get a class-not-found error, you have probably forgotten to deploy the classes.

Sample Code

I have uploaded a sample application which consists of an EJB project with client code and a BPEL project: Sample Application EJBTest.zip

Summary

In this blog entry we have looked at how to create a WSIF binding to invoke an EJB 3 stateless session bean.  If you don't have the source to the bean then you can always create an equivalent interface and follow the steps in here to generate the WSDL.  Hopefully this has made using EJB 3 from BPEL a little easier.
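As a sketch, the security properties from the Security Concern section above typically end up in the process's bpel.xml partner link binding.  The partner link name and WSDL file name below are hypothetical, and the exact file layout can vary between 10g releases, so treat this fragment as illustrative rather than definitive:

```xml
<!-- Illustrative bpel.xml fragment: "MySessionEJBPL" and the WSDL file
     name are hypothetical; substitute your own partner link values. -->
<partnerLinkBinding name="MySessionEJBPL">
    <property name="wsdlLocation">MySessionEJBRef.wsdl</property>
    <property name="java.naming.security.principal">oc4jadmin</property>
    <property name="java.naming.security.credentials">welcome1</property>
</partnerLinkBinding>
```

Setting the credentials here keeps them out of the WSDL itself, which may be shared with other consumers.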


SOA Suite

Obtaining WSDL from a Deployed 10g BPEL Process

We always talk about the virtues of loose coupling with SOA, and the service interface is a key component of this.  Often we need to extract the service interface from a deployed BPEL process in order to call the process, or to make use of some of the same services that the BPEL process calls.  When we deploy a BPEL process, both the WSDLs implemented by the process and the WSDLs invoked by the process are available through the BPEL console.

Navigating to your Process

Obtaining the WSDLs of a process is very easy.  Log in to the BPEL console and select the processes tab, then select the desired process.  This will display the process details.

Obtaining the WSDL Implemented by Our Process

To obtain the WSDL interface to the process, click on the WSDL tab.  Ensure that you have the correct version selected.  This will bring up the WSDL associated with the partner link implemented by this process.

Obtaining the WSDLs Called by Our Process

To obtain the WSDLs called by the process, go to the descriptor tab.  This provides a list of all the partner links in the BPEL process, including the partner links implemented by the BPEL process.  From here we can select the WSDL that is needed.  We can also see the properties associated with the partner link.

Note that often the BPEL designer will have created a wrapper WSDL that adds partner link information that is required by BPEL.  If this is the case, and it will be the case for most external WSDL documents, then it is necessary to examine the wrapper WSDL and extract the location attribute of the WSDL import statement.  This can then be plugged into the URL of the wrapper WSDL to provide the actual service WSDL.  Usually it can also be obtained by removing the "Ref" suffix from the partner link WSDL name.  This will provide the actual WSDL that is being used by our BPEL process, rather than the wrapper that references it.

Why Bother?

Why go to all this trouble; what is the value in obtaining the WSDLs used by a BPEL process?
Well, there are several reasons.  A few are outlined below:

There may be differences in behavior between different environments and we wish to confirm that they are using the same WSDL definitions.
We may need to create a test environment that requires us to emulate the WSDL services provided.
We may want to check endpoint details to ensure that we are able to navigate through firewalls in our environment.
We may not have immediate access to the BPEL project and want to check some WSDL interface settings.
We may just be nosy!

Download the Whole BPEL Process

It is also possible to download the whole BPEL suitcase (packaged process) by selecting the Manage tab of the process we want to download under the Processes tab.  At the bottom of this screen we can download the suitcase by clicking the Export Process button.  This downloads the suitcase as a zip file.

Summary

Often we are looking at deployed processes without easy access to the original JDeveloper project.  Through the BPEL console we can verify both individual WSDL interfaces and also download the whole BPEL source, making it easy to check what is happening.  So remember, next time you are faced with odd behavior and want to look at the BPEL source or partner link details, you don't need to call the developer!
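The "Ref" suffix trick mentioned above is just a string manipulation on the wrapper WSDL URL.  A tiny sketch, with an entirely hypothetical process URL:

```java
// Illustration of deriving the underlying service WSDL URL from the wrapper
// WSDL URL by dropping the "Ref" suffix. The URL below is hypothetical.
public class WrapperWsdl {
    public static void main(String[] args) {
        String wrapper = "http://soahost:8888/orabpel/default/MyProcess/MyServiceRef.wsdl";
        String actual = wrapper.replaceFirst("Ref\\.wsdl$", ".wsdl");
        System.out.println(actual);
        // prints http://soahost:8888/orabpel/default/MyProcess/MyService.wsdl
    }
}
```

If the resulting URL does not resolve, fall back to reading the location attribute of the import statement in the wrapper WSDL, as described above.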


SOA Suite

Software Required for Test 11g SOA Cluster

In my last entry I spoke about some of the gotchas that are involved in setting up a cluster.  Over the next few entries I am going to describe how to build a SOA Suite 11g cluster for use in a test environment.  In this entry we will look at the target architecture and the required software.

Target Architecture

I am going to build my 11g cluster on 3 machines.

Machine DB will host an 11gR1 database.  I will also use it to host a software load balancer (I will use WebCache).
Machine SOA1 will host two WebLogic installations.  A WebLogic 11g installation will have a single SOA domain hosting a SOA Suite cluster, including BAM.  A WebLogic 10.3 installation will have a single OSB domain hosting an OSB cluster.
Machine SOA2 will have the same software as SOA1 and will host the same two domains.

When OSB 11g is released the need for two separate WebLogic installations will go away, as the intention is for OSB and the rest of SOA Suite to run on the same WebLogic software version.

As there are two WebLogic domains I will run an admin server on each machine: one domain's admin server on SOA1 and the other domain's admin server on SOA2.  This helps reduce the memory footprint.

Logically the architecture is shown below, with a load balancer distributing load across the SOA and OSB clusters with a backend 11g database.  As I don't have a hardware load balancer for testing, I run the load balancer on the same machine as the database to give the physical architecture shown below.

I will run this on 3 virtual machines on a server with 8GB memory, allowing 2GB for each virtual machine.  As a large number of customers seem to be running Linux these days I will use Oracle Enterprise Linux 5.3: 64-bit Linux for the DB machine and 32-bit Linux for the SOA machines.  It is quite common for clusters to use a RAC database rather than a single instance database, but that was one VM too many for me to get my head around.
Software Required

Now that we have identified the logical and physical architecture we need to identify what software we will require.  The software is all available for download from OTN.  The target machine for each piece of software is listed below, with notes.

Oracle WebLogic Server 11g Rel 1 – required for SOA Suite – SOA1, SOA2.
SOA Suite – core SOA Suite – SOA1, SOA2.
Oracle Service Bus 10gR3 – service bus – SOA1, SOA2.  The 11g release will be available shortly.
Repository Creation Utility – creates the meta-data repository for SOA Suite – DB.  May be run from any machine with network access to the database.
Oracle Database 11g Release 1 – holds the meta-data repository for SOA Suite – DB.  Any database certified with SOA Suite may be used.
Web Tier Utilities – contains WebCache for use as a load balancer – DB.  Another load balancer may be used.
Enterprise Linux – operating system – SOA1, SOA2, DB.  Any OS certified with the database or SOA Suite may be used; the DB machine may be a different OS to the SOA machines.

Load Balancing

There are a number of software load balancers available, including functionality built into Linux, so why did I use WebCache?  Well, there are a number of reasons:

I like WebCache.
It has a nice web based UI for configuring and monitoring.
It supports cookie based affinity (see my previous post for the importance of this).
It does the job.

Just be careful when using WebCache with SOA Suite that you do not use it to cache data.  To my knowledge no testing has been done within Oracle on using WebCache in conjunction with SOA Suite 11g, so don't deploy it in a production environment.  I have to confess that the idea of using WebCache as a load balancer was not mine but my colleague Nick Cosmidis', so thanks Nick.

Other Resources

Setting up a cluster requires shared storage, ideally for the domain home but also for shared resources such as JMS message stores.
I could have used an iSCSI appliance to provide this, but I chose instead to use the DB machine as a shared file server for the mid-tier components.

The cluster also requires IP addresses.  Obvious, but there are different requirements for those IP addresses.  The IP address for the load balancer must be routable from all the clients of the cluster.  The database, SOA Suite and OSB instances can have non-routable IP addresses as long as they can talk to each other and to the load balancer.  The clients don't have to be able to access the database, SOA Suite or OSB directly because they will go through the load balancer.

Virtualization

I am running this in a virtualized environment, a single 8GB machine hosting all three machines.  The only software virtualization fully supported by Oracle is Oracle Virtual Machine.  That is not to say it won't work on other software virtualization environments such as VMware, just that it is not fully supported on those environments.  For more information on Oracle's support policy with respect to virtualization in general check out this link.  For specific information on VMware support check Note 249212.1 in MetaLink.


SOA Suite

What I learnt About Clustering

Since moving to support I have learned a lot about clustering.  Some of the things I have learnt are:

Lots of customers are running SOA Suite clusters.
Lots of them haven't read the High Availability Guide (10g or 11g).
Lots of them haven't read the Enterprise Deployment Guide or EDG (10g or 11g).
Many of them have problems because of the points above.

Part of the problem for many customers is that setting up a cluster has a lot of steps and a few gotchas that can come back to bite you.  I just got off the phone with a customer who was having problems with a cluster install; nothing too serious, but irritating and slowing him down.  Unless the HA guide and EDG are followed very carefully it is easy to make mistakes.  A few common problem areas I have seen are:

Failing to separate the design time and run time of the ESB.  The ESB design time is a singleton and there must never be more than one instance of the ESB design time active against the same repository at the same time.
Poor configuration of JGroups or Coherence.  In 10g JGroups is used to identify cluster membership; in 11g Coherence plays the same role.  If these are not configured consistently across the cluster then one part of the cluster may be unaware of the existence of other parts of the cluster, with dire consequences for some shared resources.
Failure to set up virtual addresses correctly.  The cluster should have a single virtual address.  This virtual address needs to be configured in the HTTP listener, in the OC4J servlet engine and in the BPEL URL settings, amongst other places.  Failure to do this can lead to odd behavior.
Failure to test on a cluster.  It seems many companies have clusters in production but not in test and dev.  Surely no-one is that stupid, you say.  Well, they may configure test instances with a cluster but for resource reasons run only one node of the cluster; after all, a one node cluster is still a cluster, right...
They then wonder why they only get certain problems in production and can't reproduce them in test.
Failure to have suitable test load balancers.  Often customers will not have a hardware load balancer for use in their test or dev environments and will rely on a software load balancer.  Some of these software load balancers use IP stickiness to keep affinity between clients and servers.  This is bad when testing because it means that if you run a test script from a single machine it will only target a single machine in the cluster.  Make sure when testing with a software load balancer that it uses HTTP cookie affinity rather than IP address affinity.
Poor testing with BPEL drivers.  Often we use BPEL processes as test harnesses to exercise functionality.  This is good, but it doesn't work well in a cluster without some modifications to the driver process.  By default BPEL will optimize communications: within the same JVM it will use Java calls, between JVMs in the same cluster it will use ORMI; it only uses HTTP if it has to.  To test properly we need to distribute load through the load balancer and hence want to use HTTP.  On the invokes we can use the property optSoapShortcut=false to force calls to go through HTTP and hence the load balancer.

The above isn't an exhaustive list.  If you have others then feel free to share them; I would love to add more to the list.
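For the BPEL driver point above, the optSoapShortcut property is set on the partner link binding of the driver process.  A sketch of what this looks like in bpel.xml; the partner link name and WSDL file name here are hypothetical, and the exact layout may vary by release:

```xml
<!-- Illustrative bpel.xml fragment: forces invokes on the hypothetical
     partner link "TargetService" out over HTTP, and hence through the
     load balancer, rather than taking the in-JVM or ORMI shortcut. -->
<partnerLinkBinding name="TargetService">
    <property name="wsdlLocation">TargetServiceRef.wsdl</property>
    <property name="optSoapShortcut">false</property>
</partnerLinkBinding>
```

Remember to remove or revert this once testing is complete, as the shortcut optimizations are desirable in normal operation.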


Miscellaneous

SOA at the Top of the UK

As part of the three peaks challenge, which I completed this week, the Oracle team were challenged to get a picture of someone reading Matt's and my book - the SOA Suite Developer's Guide - on top of each peak.  Thought I would share the story and the pictures with you.  The objective of the three peaks challenge is to walk up the highest peak in each of England, Scotland and Wales in a single 24 hour period.

Peak 1 - Ben Nevis

Ben Nevis in Scotland is the highest peak of the three at 1,344m.  We started our ascent at 5pm on Friday and reached the summit by 7:30pm.  As can be seen in the picture of myself and Neil Spink reading the SOA Suite book, the weather at the top was clear so that we could see for miles.  The weather was hot and sunny and I struggled a little on the way down, but we were all off the mountain by 9:45pm and on the road to Scafell Pike in the Lake District, not far from my brother-in-law's house.  We left for Scafell at 9:51pm.

Peak 2 - Scafell Pike

Scafell Pike in England is the smallest of the three peaks at 978m.  We arrived at 3:32am, just as the sky was lightening in the east.  The moon was also very bright with little cloud cover so we had no problem seeing our route, and we reached the summit by 5:30am just as the sun was rising above the surrounding peaks of the Lake District.  There was little cloud about and Daniel Roberts and myself could easily see the book at the top.  We were all back down the mountain by 7:17am and on the road to Snowdon.  Our drivers might have been controlling 17 seater mini-buses, but to the occupants it felt like we were participating in a particularly tough rally as we wound our way out through the tortuous lanes on our way to North Wales.

Peak 3 - Snowdon

The final peak, Snowdon, is the tallest mountain in England and Wales at 1,085m and was challenging in two ways.
When we arrived at 11:50am we had already spent more than 8 hours on mountains and more than 11 hours crammed into what seemed to be a shrinking mini-bus, so we were not at our most energetic.  Also, at Snowdon the weather was failing and it was beginning to rain.  However, arrival at Snowdon energised us and within a minute of arrival we were striking out towards the mountain.  The initial approach up the Pyg Track was fairly easy going, with just a scramble at the end to reach the summit by 1:45pm.  The summit was cloudy and wet so no-one wanted to hang around for long, especially myself and Andy Gale who found the book rather damp going.  The descent was also very quick, using the Miners Track to drop down off the mountain very rapidly, followed by what felt like a long run/walk around the mountain back to the start point.  We had all arrived back by 4:02pm, meaning that as a group we completed the challenge in 23 hours and 2 minutes.

Thanks

Special thanks must go to our drivers, Colin, James, Martin and Jackie, who got us between peaks rapidly and safely.  I was very appreciative of Andy Gale who made sure I kept pace on Scafell Pike and Snowdon.  Finally, the determination award must go to Neil Spink who, despite blisters and damaged knees, climbed Snowdon and completed the challenge.

Sponsorship

Apart from wanting to show that we were not all over the hill (most of us were well over 40!) we also wanted to raise money for the NSPCC.  We are still a few hundred pounds shy of raising £10,000 for this charity, so please feel free to go to my justgiving page and sponsor me.

Statistics

The following is from a GPS tracker we took with us.

Ben Nevis - start at 51m, top 1,347m (including the trig point, I guess), distance 15.1km or 9.38 miles
Scafell Pike - start at 29m, top 985m, distance 9km or 5.59 miles
Snowdon - lowest point 376m, top 1,088m, distance 12.4km or 7.7 miles
Total climb = 1,296 + 956 + 712 = 2,964m or 9,724 ft
Total distance = 22.67 miles


SOA Suite

Clustering SOA Suite

Building a SOA Suite Cluster

Having spent a couple of weeks working on a SOA Suite cluster, I thought I would share some thoughts around clustering and SOA Suite.  Clustering of both BPEL Process Manager and Oracle Service Bus is relatively straightforward but there are a few gotchas.  Both BPEL and OSB are stateless in the way they implement clustering; however, BPEL does of course persist state to a database.

SOA Suite Clusters

Both BPEL and OSB clusters expect to be fronted by a load balancer.  Both can provide load balancing through a front end web server, but a hardware load balancer is the best approach, as shown in the diagram.  In this example we have an Oracle Real Application Clusters database running on 3 nodes to provide a high availability database environment.  We then have two clusters: a cluster of 5 BPEL Process Manager instances, all pointing to the same RAC database, and a cluster of 5 OSB instances.  The BPEL cluster and the OSB cluster are both fronted by two hardware load balancers.

The BPEL instances and the OSB instances are in active-active mode, meaning that all nodes are processing requests at the same time.  The load balancers are in active-passive mode, meaning that one load balancer processes all the traffic with the other load balancer acting as a hot standby in case of failure of the first.

This configuration avoids a single point of failure as every component is duplicated.  The system has been sized to be able to sustain the expected load even in the event of losing a machine.  This is known as an "n+1" architecture, meaning that we need "n" machines to meet the requirements and so we provide "n+1" machines to allow for machine failure.  In the example shown we actually have two machines more than we need in normal operation because the RAC database is running an "n+1" configuration as well as the SOA Suite.
Note that by running OSB on the same machines as BPEL we reduce the amount of extra hardware needed for failover and also reduce the latency of OSB-BPEL communication. In the event of a failure then only in-process requests would be impacted.  Any requests that are "idempotent" (meaning they can be resubmitted with no ill effects) can be set up to automatically retry, further reducing the impact of a software or hardware failure.  Both BPEL and OSB can be set to automatically retry requests in event of failure, making the failover transparent. Note that clustering does not help if we have a site failure due to fire, flooding, power failure or air conditioning failure for example.  In those cases we would need to have some sort of disaster recovery site, perhaps using Oracle Data Guard to keep the sites in synch at the database level. BPEL Clusters A BPEL cluster is effectively defined by a shared dehydration store.  Synchronous interactions must be processed within a single BPEL server instance, as the client has connected to a socket and expects a response on that same socket.  Asynchronous interactions are like any other long running BPEL process and may start processing on one node and then have processing resumed on another node, either due to failure or some other event.  The dehydration store (an oracle RAC database in the example above) provides a common location for process state that allows any BPEL instance to resume execution of a process instance. When installing a BPEL cluster the best place to start is the High Availability Guide.  This outlines that you create a BPEL cluster by doing the following: Get the address of the load balancer. Run the repository creation assistant to create the BPEL meta-data in the database. Install OHS and OC4J components on the machines. Install BPEL Process Manager into the app server instances installed previously. If using a RAC database make sure that the JNDI data sources are using all RAC nodes.  
See the Enterprise Deployment Guide for instructions on using Fast Connection Failover. Configure BPEL in a cluster as outlined in the Enterprise Deployment Guide. Set up JGroups to make all nodes aware of each other in the cluster.  Note that in 11g Coherence will be used for this which will simplify configuration. Set enableCluster and ClusterName in the collaxa-config.xml file. Make sure the BPEL PM instances all use the load balancer address for server URL and callback URLs.  This ensures that in event of node failure requests and responses are rerouted to remaining instances. Deploy processes on each node to make sure that all components are available on all nodes.  If you don't do this some processes will work because they don't have dependencies on non-BPEL components, but others will not. Once set up use the BPEL fault handling framework to make sure that any calls to OSB are automatically retried in event of failure. OSB Clusters An OSB domain may have a single OSB cluster.  This cluster is installed like any other cluster in WebLogic.  Details on configuring the cluster can be found in the Creating WebLogic Domains Using the Configuration Wizard documentation.  For normal operation the OSB cluster is completely stateless, however the metrics gathering and aggregation takes place in a singleton service that by default is assigned to the first machine created in the cluster.  If this server fails then metrics will stay in the message queues to which they are delivered until a new instance starts. Get address of the load balancer. Install OSB software onto each machine or into a single shared location.  If a shared location make sure that the reference to it is the same on each machine. Create an OSB domain Creating a cluster and machines and assign servers to machines as explained in the Creating WebLogic Domains Using the Configuration Wizard documentation. Use load balancer address for cluster address. 
If you are not using a shared file location for your OSB install then you need to copy the contents of the osb domain directory to all nodes.  This ensure that the correct scripts are available for the node manager to launch managed servers. Run node manager on each machine. You can now launch your admin server and start the managed servers one at a time.  It is recommended that you start the OSB server running the data collectors first.  This will avoid timeouts on the other machines in the cluster and ensure that metrics are available. If message queues are required to be highly available then they should use persistent storage either on a shared highly available disk (a SAN for example) or they should use database persistence. If you are using an Oracle database then you should configure your OSB domain to store metrics in the Oracle database rather than in pointbase as explained in Creating WebLogic Domains Using the Configuration Wizard. When calling BPEL or external web services from OSB make sure that you specify retries to allow for node failure.  Intra-OSB calls should be done using "local" transport for efficiency. Summary Configuring a cluster is a bit more involved than configuring a single instance, but it is not massively more complicated and it does provide both scalability and high availability.  Both BPEL and OSB scale linearly with increased nodes.  The only limitation on BPEL is the load on the backend dehydration store.  So go ahead, enjoy a cluster!
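The enableCluster and clusterName settings mentioned above live in collaxa-config.xml on each node.  As an illustrative fragment only (the property names are as documented, but the exact surrounding layout varies between releases, and "bpelCluster1" is a made-up name — check your own file):

```xml
<!-- Illustrative fragment only; check your release's collaxa-config.xml
     for the exact property layout. -->
<property id="enableCluster">
    <value>true</value>
</property>
<property id="clusterName">
    <!-- Hypothetical name; it must match on every node in the BPEL cluster -->
    <value>bpelCluster1</value>
</property>
```

The same value must be set on every node, and the servers restarted, before the nodes will discover each other via JGroups.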


SOA Suite

Using 11g Database with SOA Suite 10.1.3

Installing SOA Suite 10.1.3 with an 11g Database

I am just at a customer who has an 11g RAC database that he wants to use for his SOA repository.  If you try to install SOA Suite into an 11g database, it tells you that the database is not supported, and the irca configuration assistant fails to find a Java library.  11g is a certified platform for SOA Suite 10.1.3.4, so here is how to get it installed.

IRCA

Before installing the SOA Suite executables you need to run the irca script to create the SOA Suite schemas in the 11g database.  The irca script needs to be able to find an ojdbc14.jar file.  This file is not shipped with the 11g database, which provides libraries for Java 5 and Java 6 rather than the almost obsolete Java 1.4.  This leaves you with a couple of options:

- Use an Oracle 10g home as your Oracle home, if you have one on the machine, when running irca.
- Copy the JDBC libraries from an Oracle 10g home (<Oracle10g_Home>/jdbc/lib) to the Oracle 11g JDBC location (<Oracle11g_Home>/jdbc/lib).

Having done this, the irca script should run fine and create the ORABPEL, ORAESB and ORAWSM schemas for you in the 11g database.  You are now ready to run the SOA Suite installer.

Installer

When you run the installer to create the SOA Suite instance, it will fail when checking for the existence of the SOA Suite schemas unless you patch the installer files first.  To do this, download patch 6265268 from MetaLink and follow the instructions, which basically require you to modify the install media as follows:

Replace DBConnectQueries.jar:
- Move <MEDIA>/stage/Queries/DBConnectQueries/8.4/1/DBConnectQueries.jar to <MEDIA>/stage/Queries/DBConnectQueries/8.4/1/DBConnectQueries.jar.pre6265268
- Copy <PATCH>/DBConnectQueries.jar from the patch to <MEDIA>/stage/Queries/DBConnectQueries/8.4/1/DBConnectQueries.jar

Note that the current patch documentation incorrectly refers to an 8.5 directory rather than the 8.4 that actually exists.

Replace IP_DBQueries.jar:
- Move <MEDIA>/stage/Queries/IP_DBQueries/3.0/1/IP_DBQueries.jar to <MEDIA>/stage/Queries/IP_DBQueries/3.0/1/IP_DBQueries.jar.pre6265268
- Copy <PATCH>/IP_DBQueries.jar from the patch to <MEDIA>/stage/Queries/IP_DBQueries/3.0/1/IP_DBQueries.jar

You can now launch the installer and there will be no complaints about an 11g database.  Note that if you are installing a SOA Suite cluster, this will need to be done for each SOA Suite instance being installed.

What's the Point?

As you make these changes you may well ask why you are bothering.  Apart from being told to do so by your DBAs, there are some good reasons for using 11g.  The 11g database is the most manageable Oracle database ever, and several options only work on the 11g database.  11gR1 has been out for a long time now, so from a longevity perspective it is best to put new deployments on an 11gR1 platform, even if your strategy is normally to avoid "bleeding edge" first releases.  Finally, as you can see, it is not that hard to use an 11g database with SOA Suite.
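Pulling the file operations above together, they can be scripted roughly as follows.  The Oracle home, media and patch locations are hypothetical; the sketch stages dummy files in a scratch directory so the commands can be exercised safely end to end — against a real install, set the four variables to the actual locations and skip the staging step.

```shell
# Hypothetical locations -- point these at your real Oracle homes,
# the SOA Suite install media, and the unzipped 6265268 patch.
ROOT=$(mktemp -d)
ORACLE10G_HOME="$ROOT/ora10g"
ORACLE11G_HOME="$ROOT/ora11g"
MEDIA="$ROOT/media"
PATCH="$ROOT/patch6265268"

DBQ="$MEDIA/stage/Queries/DBConnectQueries/8.4/1"
IPQ="$MEDIA/stage/Queries/IP_DBQueries/3.0/1"

# Stage dummy files so this sketch runs standalone; omit for a real install.
mkdir -p "$ORACLE10G_HOME/jdbc/lib" "$ORACLE11G_HOME/jdbc/lib" "$DBQ" "$IPQ" "$PATCH"
touch "$ORACLE10G_HOME/jdbc/lib/ojdbc14.jar" \
      "$DBQ/DBConnectQueries.jar" "$IPQ/IP_DBQueries.jar" \
      "$PATCH/DBConnectQueries.jar" "$PATCH/IP_DBQueries.jar"

# 1. Give irca the Java 1.4 JDBC driver it looks for.
cp "$ORACLE10G_HOME/jdbc/lib/ojdbc14.jar" "$ORACLE11G_HOME/jdbc/lib/"

# 2. Patch the install media: back up each jar, then drop in the patched one.
mv "$DBQ/DBConnectQueries.jar" "$DBQ/DBConnectQueries.jar.pre6265268"
cp "$PATCH/DBConnectQueries.jar" "$DBQ/DBConnectQueries.jar"
mv "$IPQ/IP_DBQueries.jar" "$IPQ/IP_DBQueries.jar.pre6265268"
cp "$PATCH/IP_DBQueries.jar" "$IPQ/IP_DBQueries.jar"
```

The `.pre6265268` backups let you restore the original media if you later need an unpatched copy.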


SOA Suite

Using Oracle Enterprise Linux with SOA Suite 10.1.3

I have just been with a customer who was using Oracle Enterprise Linux 5.  This shouldn't be any different from other Linux installations, except for one minor problem: the SOA Suite installer insists on checking that the Linux flavor is one explicitly supported by SOA Suite.  OEL wasn't in the list when SOA Suite 10.1.3.1 came out, so the installer fails the prerequisite checks and won't go any further, even if you have applied all the required patches to the OS.  Fortunately there is a patch, 6339508, which provides replacement prerequisite tests.  To use it you need to add two parameters to the runInstaller command as shown below:

./runInstaller PREREQ_CONFIG_LOCATION=<PATCH_LOC>/prereq -paramFile <PATCH_LOC>/oraparam.ini

PATCH_LOC is the location where you unzipped the 6339508 patch.  With these new parameters the installer correctly recognizes Oracle Enterprise Linux as a supported platform.  Note that OEL needs some specific patches when used with SOA Suite, and these need to be installed prior to running the installer; check the documentation for details.

I am seeing an increasing number of customers taking up OEL, I think because of the attraction of a single company supporting the whole technology infrastructure stack: OS, AS and DB.  I expect to see more of this in the future.


Miscellaneous

Raising Money for the NSPCC

One of Oracle's chosen charities in the UK is the NSPCC, the National Society for the Prevention of Cruelty to Children.  The NSPCC is the UK's leading charity specialising in child protection and the prevention of cruelty to children.  It is the only children's charity with statutory powers, enabling it to act to safeguard children.  In the UK it is well known for running ChildLine, a 24-hour helpline for children in distress or danger, where trained volunteer counsellors comfort, advise and protect children and young people who may feel they have nowhere else to turn.

This year, together with a number of other Oracle UK employees, I will be participating in the Three Peaks Challenge in July: climbing to the top of Ben Nevis, Scafell Pike and Snowdon, all within 24 hours.  This will be no mean feat.  It involves making your own sandwiches as well as walking for 15 hours, covering 30 miles and over 10,000 feet of ascent and descent, interspersed with 500 miles of driving.  I've been walking up and down stairs in the office for weeks in preparation.

If you would like to sponsor me in this somewhat madcap activity, donations can be made via PayPal, credit or debit card at my Justgiving site.  If you're a UK taxpayer, Justgiving makes sure 25% in Gift Aid, plus a 3% supplement, is added to your donation.  So go ahead, sponsor me!


SOA Suite

Mastering Details with Flat Files

The Problem

The native format builder wizard in the file adapter is great at reading flat file structures, but it doesn't support reading more structured files.  Sometimes we need to read more complex structures such as master-detail records.  Let's look at how we can use the file adapter to read structured file formats.  For example, imagine a laundry list file such as the one below:

P,101,James
L,Shirt,2,Starch
L,Socks,6,De-odorise
L,Pants,2,Remove Stains
P,220,JoJo
L,Sweatshirt,1,Handwash
L,Shirt,3,No Iron
L,Socks,2,Iron
L,Pants,2,Steam Press
L,Tie,1,Dry clean
P,305,Ruth
L,Skirt,7,Iron Pleats
L,Socks,8,Iron

The record structure is that each person record (a line prefixed with 'P') is followed by the laundry item records (lines prefixed with 'L') that belong to it.  This can be represented in XML as a nested structure, with each person's items nested under the person record.

The native format builder wizard would treat each line either as the same type of record ("Multiple records are of single type") or as two different types of record ("Multiple records are of different types").  In the first case we would have a single record type with four fields that fails to distinguish between people and laundry items; this does not reflect the fact that the two kinds of line are different.  The second case, multiple records of different types, is closer to what we want: it creates two record types, the type being determined by the value of the first field.  However, this case creates a flat list of two record types, with no recognition that one record is nested inside the other.

Comparing the two representations we can see what needs to be done: we need to add an Items element under Room and move the Item element to be under Items.  The question is how we describe this using the native schema constructs supported by the file adapter.
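To make the target concrete, here is a sketch of the nested document we are aiming for.  The Room, Items and Item element names follow the schema changes described in this post; the root element and field names are invented for the sketch, since the wizard lets you choose your own.

```xml
<!-- Illustrative only: LaundryList and the field names are invented. -->
<LaundryList>
    <Room>
        <Number>101</Number>
        <Name>James</Name>
        <Items>
            <Item><Type>Shirt</Type><Quantity>2</Quantity><Instruction>Starch</Instruction></Item>
            <Item><Type>Socks</Type><Quantity>6</Quantity><Instruction>De-odorise</Instruction></Item>
            <Item><Type>Pants</Type><Quantity>2</Quantity><Instruction>Remove Stains</Instruction></Item>
        </Items>
    </Room>
    <!-- ...one Room element per 'P' record in the file... -->
</LaundryList>
```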
Creating a Master-Detail Records Native Format Schema

The easiest way to deal with this is to use the native format builder wizard in the file adapter to create the basic outline of the records for us.  We choose "Multiple records are of different types" and provide appropriate names for the two record types and the individual fields within the records.  Having created the basic native format schema, we can now edit it to be exactly the way we want it for the laundry list.  We make the following changes in the generated native schema file:

1. Replace the <choice> element with a <sequence> element.  In conjunction with the other changes this gives us a list of elements of the same type, rather than a list of elements of two different types.

Before:
    <xsd:choice minOccurs="1" maxOccurs="unbounded"
        nxsd:choiceCondition="terminated"
        nxsd:terminatedBy=",">
        …
    </xsd:choice>

After:
    <xsd:sequence>
        …
    </xsd:sequence>

2. Replace the conditionValue attributes with startsWith attributes, adding a comma to the end of the attribute values.  This allows the native schema processor to identify the start of master records and child records.  We also add a maxOccurs attribute to the modified elements so that they can have multiple instances in a sequence.

Before:
    <xsd:element name="Item"
        nxsd:conditionValue="L">
        …
    </xsd:element>

After:
    <xsd:element name="Item"
        nxsd:startsWith="L,"
        maxOccurs="unbounded">
        …
    </xsd:element>

Before:
    <xsd:element name="Room"
        nxsd:conditionValue="P">
        …
    </xsd:element>

After:
    <xsd:element name="Room"
        nxsd:startsWith="P,"
        maxOccurs="unbounded">
        …
    </xsd:element>

3. Add an <Items> element as a sequence within the <Room> element's sequence, and move the <Item> element to be inside the <Items> sequence.
    <xsd:element name="Room" nxsd:startsWith="P," maxOccurs="unbounded">
        <xsd:complexType>
            <xsd:sequence>
                …
                <xsd:element name="Items" maxOccurs="1">
                    <xsd:complexType>
                        <xsd:sequence>
                            <xsd:element name="Item" nxsd:startsWith="L," maxOccurs="unbounded">
                                …
                            </xsd:element>
                        </xsd:sequence>
                    </xsd:complexType>
                </xsd:element>
            </xsd:sequence>
        </xsd:complexType>
    </xsd:element>

This gives us a native schema format with the nested Room/Items/Item structure that we were aiming for in the first place.

Sample Code

I have created a simple BPEL process that performs the following steps:

1. Read a laundry file using the wizard-generated native schema format.  This creates an XML document with two distinct record types.  Note that the process does not delete the file, so if you re-use the same file with the process you need to update its timestamp!
2. Read the laundry file again using the modified native schema format.  This creates an XML document with master-detail style records, which reflects the actual structure of the file.  Note that I use a synchronous read with the filename provided by the inbound header from the previous file read, and I delete the input file after reading it.
3. Write the laundry file using a pure XML schema to create an XML document file of the laundry list.

Sample code is zipped up as a JDeveloper project and can be downloaded here.  In addition to the normal BPEL artifacts within the project, I have included a sample laundry file (laundry.sample.txt) in the top level of the project directory.  After deploying to a BPEL server, the process will look in C:\FileTransfer\InBound for the input file; after processing, the input file is written to C:\FileTransfer\Processed and the output file is generated in C:\FileTransfer\OutBound.
Either create the appropriate directories or edit the project to use new directories.

Documentation

This entry has used the native format schema facilities of SOA Suite.  These are documented in chapter 7 of the Oracle® Application Server Adapters for Files, FTP, Databases, and Enterprise Messaging User's Guide.  Good luck in using the adapters to process master-detail records!


SOA

Tuxedo Connections

Tuxedo Connections, or the On Ramp to Tux

Tuxedo can be considered the original and purest service oriented architecture.  The key abstraction in Tuxedo is the service, and everything is made to fit into the service mould.  It seems strange, then, that people think of Tuxedo as a legacy application.  Tuxedo is highly regarded by the senior management team in Oracle, who view it as a key tool to support extreme transaction processing.  The question, then, is how this relates to the rest of the SOA world, which does not subscribe to Tuxedo technologies such as ATMI, C++ or COBOL.  The diagram below shows the different interfaces into and out of the Tuxedo world.  Let's look at them briefly and at how they relate to the rest of the SOA world, which is focussed on XML, SOAP and HTTP.

Native Client Interfaces

The native client interfaces to Tuxedo are the C, C++, .Net and COBOL client interfaces using ATMI (Application to Transaction Monitor Interface).  There is also a version of ATMI for Java, called Jolt.  These interfaces allow clients to invoke Tuxedo services and get responses.  They do not allow Tuxedo to invoke services in these clients, except by the client listening on Tuxedo message queues; these interfaces are asymmetric.

Legacy Client and Server Interfaces

A Tuxedo domain can interface to other Tuxedo domains, treating their services as though they were its own.  This capability is extended to other systems, such as mainframes.  The external systems see Tuxedo services as native services and invoke them as they would any other service; similarly, Tuxedo sees these external systems as native Tuxedo services and invokes them as it would any other service.  This provides relatively seamless integration between legacy environments and Tuxedo and allows either side to operate as a server to the other; in other words, these interfaces are symmetric.
Open System Interfaces

In addition to treating legacy mainframe interfaces and other Tuxedo domains in the same way as local services, Tuxedo can also do this for open system standards such as CORBA and Java code running in WebLogic Server.  CORBA applications can invoke Tuxedo services through the CORBA API, and Tuxedo services can invoke CORBA objects.  The WebLogic Tuxedo Connector (WTC) extends the capabilities of Jolt to become fully symmetric, in that EJBs in WebLogic can be invoked as services from Tuxedo.

A Transactional Note

Note that all the interfaces we have spoken about so far are transactional, in the sense that they are part of the Tuxedo transaction infrastructure; invoking a remote mainframe transaction may cause an XA transaction to be started within Tuxedo.  When calling an EJB in WebLogic, this too is part of the overall Tuxedo XA transaction infrastructure.

Web Service Access to Tuxedo

There are two alternative ways to get web services to access Tuxedo.  The most obvious is to use SALT (Service Architecture Leveraging Tuxedo), which exposes Tuxedo services as web services and allows Tuxedo to invoke web services as though they were Tuxedo services.  This is a symmetric interface and takes care of all the XML-to-Tuxedo translations, but it is not transactional: the web service call is not part of the transaction.  A web service request to Tuxedo may cause a Tuxedo transaction to be initiated, but web services don't currently provide a transactional context.  Similarly, when Tuxedo makes a call to a web service, that call is not part of any Tuxedo transaction.

So what if you want transactionality and access to web services?  This is where the service bus comes in.  The Oracle Service Bus (OSB) takes advantage of the WebLogic Tuxedo Connector to provide a fast, efficient and transactional interface to and from the Tuxedo world.  This allows a pipeline to make two separate calls to Tuxedo as part of the same transaction.
Note that there are a couple of wrinkles to making this happen, and I will deal with those in a later post.

Summary – Long Live Tux, King of Services

Not only is Tuxedo the original service oriented architecture but, despite being more than 20 years old, through SALT and the WebLogic Tuxedo Connector it still speaks the modern lingo of service buses, Java, XML, SOAP and HTTP.  So if you have a Tuxedo investment, don't write it off; look at how you can more easily make your Tux services available to the new-fangled XML-based web service world.


SOA Suite

Oracle SOA Suite Developer’s Guide Published

Oracle SOA Suite Developer’s Guide

My friend Matt Wright just pointed out to me that I hadn’t mentioned that our book is now available.  Thanks to the guys at Packt Publishing for guiding us through the process and publishing the book.  Our focus when writing the book was to provide a practitioner’s guide to implementing SOA using the Oracle SOA Suite.  As such, we have set each component of the SOA Suite within the context in which it might be used.  This seems to be the area that a lot of companies starting down a SOA path struggle with: they are unsure which bits of technology to use for what.  Dave Shaffer, Oracle VP for SOA, kindly wrote a foreword for us.

Writing the book has been a huge undertaking for both Matt and myself.  During the writing of the book the 11g release was seriously delayed, so we switched versions from 11g to 10.1.3 (the current production release).  Matt decided that he was sick of 365 days of rain and moved his family from England to Australia.  Finally, Oracle purchased BEA, which caused a re-evaluation of the Service Bus technologies.

I am pleased with the way the book has turned out, and Matt and I learnt a huge amount as we prepared it; both of us had to delve into areas with which we weren’t totally familiar.  Matt became a master of the Rules engine, and I discovered the marvelous new deployment capabilities added to the 10.1.3.4 release.  We have made every effort to match the book to the current production 10.1.3.4 release, so get yourself a copy and give us some feedback.


Fusion Middleware

How to Build a Product Suite

How to Build and Manage a Product Suite

I was in Redwood Shores this week with a customer, and we were lucky enough to have Thomas Kurian speak to us for an hour in a Q&A session.  One of the customers I was accompanying, Michael, asked a really useful question; well, actually he asked several, but I am only blogging about one of them.  Michael is in charge of his company’s largest software development, one that will redefine the types of service his company can offer.  Quite naturally he is feeling a little pressured, so his question to Thomas was not related to technical issues but to the philosophy of integrating different products into a consistent product stack.  Obviously Thomas has a great track record on this: WebLogic Suite combines products from Oracle, BEA and Tangosol in a single product stack; SOA Suite combines products from Oracle, BEA and Oblix into a single product stack; and so on.

Thomas identified the following steps that can be applied when integrating products into a consistent product set:

- Group similar functionality together into suites.  This enables a focus on related pieces of functionality and avoids being overwhelmed by the sheer size of the product stack.  It also simplifies the messaging that has to be communicated to the market.
- Get the pieces to work together.  Within a suite the emphasis is on making the components work well together, eliminating duplication of function.
- Pick up dependencies in a single way.  Everyone should access the functionality in the same way.  This takes advantage of common abstractions and makes it easier for clients of the suite to take up new functionality in a seamless fashion.
- Use suite pricing to encourage big-picture thinking.  Customers generally want several related pieces of technology.  Bundling them together into suites at a combined price focuses the development teams not just on their small piece of the puzzle but on the wider suite, giving them an incentive to make sure it all works together.
- Mandate.  In addition to the carrots mentioned above, force people to pick up functionality in a single way and to be consistent across components in the suite.

In conclusion, Thomas identified three principles that guide the above steps:

- Unify – using suites
- Simplify – everyone accesses functionality in the same way
- Mandate – force everyone in your organization to play by the rules

Thomas was adamant that Fusion Middleware would be more than simple branding.  Over the last five years Thomas has moved Fusion Middleware towards tighter and tighter integration; the latest demonstration of this will come later this year with the release of SOA Suite 11g.  Michael wasn’t looking for a silver bullet, but I think he did appreciate Thomas’ thoughts on this one.

