Cloud Security Perspectives and Insights

Recent Posts

Cloud Infrastructure Security

Why I Love Working with Data Safe and Oracle Database 21c

One of the great things about providing a cloud service is how easy it is to update the service with new features, and Oracle Data Safe is no exception. For example, this week we've added support for Oracle Database 21c. Since we released the service at OpenWorld San Francisco last year, we've seen enormous growth, and the customer response has been fantastic. If you are running a database in the Oracle Cloud and aren't already using Data Safe, you really should try it out – Data Safe is included with all of our in-cloud Database as a Service offerings – including Autonomous Database and Exadata Cloud Service – at no additional cost. If your databases are running on-premises, take a look at Data Safe to see if you'd like to use it for those as well!

But back to my main topic – the ease of updating a cloud service. Comparing the process for enhancing or fixing a cloud service like Data Safe with the same process for an on-premises product is like night and day. For on-premises products, enhancements are scheduled and rolled into a delivery vehicle – usually a quarterly release or, for a major enhancement, the upcoming annual release. Depending on where in the development cycle the enhancement request comes in, it can take months or even years to bring a new feature to our customers. And the QA cycles before release are long and complex because Oracle runs in so many different server and operating system environments.

With Data Safe, we roll out fixes and updates every few weeks – it's a continuous cycle of improvement. Usually these are small improvements – making something easier to understand, fixing a typo in on-screen text, adding a new sensitive data format to the over 125 existing formats, or adding a new masking capability like group-based masking. We are constantly moving the usability and quality of the service higher.
Every now and then, it's a "hot fix" – we spot an issue that is impacting multiple customers and needs to jump the normal development sprint cycle. In one recent case, a report about how we were handling large objects came in from one customer, was confirmed by another customer about eight hours later, and was fixed – with the fix rolled into production for ALL Data Safe customers – less than a day later. This is what I love about cloud services – how quickly we can fix or improve things, and how confident we can be rolling those changes out since the deployment environment is homogeneous and controlled. Some recent examples:

Automated registration for Autonomous Databases. I love the Autonomous Database because it lets me get down to business quickly – I don't have to worry about setting up encryption, separation of duties, patching – the everyday tedium of securing a database. It's all done for me. But, because it's all done for me, setting up monitoring tools like Data Safe used to mean I had to figure out what someone else had done for that automation so I could connect my tools into the system. Several customers commented on the difficulty of registering an Autonomous Database with Data Safe, so we created the "Easy Button" – registration is now automated, with network ingress rules, certificate import, and credentialing all handled in the background. And we're working with the Autonomous Database product managers to make things even easier in upcoming releases. The point is, this automation, which made a significant difference in the ease of use of Data Safe, happened just a couple of weeks after we identified the issue. And for our customers, that "Easy Button" just appeared on their Autonomous Database console.

Federated logon support. Our initial release of Data Safe required local accounts.
During our testing and limited availability program this didn't seem like a significant barrier to adoption, but once Data Safe was generally available we received feedback from several customers that they preferred to use only federated identities, with no local logins. Here again, in a few short weeks we had the solution developed, tested, and pushed out to our customers. So one day, the requirement for local logins just went away.

Private IP address support. Another project we are working on is removing the requirement for a public IP address. The OCI networking team partnered with us to create a new network construct called the "Private Endpoint" that allows our customers to grant direct access to Data Safe without routing that access through a public IP address. Limited availability has been in progress for a few weeks, and so far everyone loves it. One day soon, our customers will just see this new capability appear, with no need to apply a patch, install software, or upgrade their hardware.

And our most recent change – Oracle Database 21c. With Data Safe, we are able to support Database 21c on the same day it is released! It just doesn't get much better than this.


Cloud Infrastructure Security

Security Insights for your web apps with OMC Log Analytics

1. Introduction

Oracle Cloud Infrastructure (OCI) Web Application Firewall (WAF) is an Oracle Cloud service that protects your web applications against threats, and logs are available within the WAF service. In this blog, we're going to leverage those logs to build a comprehensive dashboard with Oracle Management Cloud (OMC) and get insight into what's happening in the web application from a security perspective. The final result will be the dashboard shown below, including tabs for an Activity Overview, the Top 10 OWASP threats, and detected events from several sources. An exported version of this dashboard is available at the end of this blog for you to import into your own OMC environment.

In this article, we will see how to forward WAF logs to an OCI Object Storage bucket, configure an OCI Event Service rule to trigger an OCI serverless function, use REST APIs to generate OAuth tokens to upload the logs into OMC Log Analytics, and finally import the above-mentioned dashboard.

As a prerequisite, you should be familiar with OMC, IDCS, OCI, and REST APIs. You will need an Oracle Cloud account already provisioned with a specific compartment. If you don't already have an account, you can sign up for Oracle Cloud's Free Tier. Let's proceed with a compartment called omcwaf_compartment, a WAF policy configured in that compartment, an OMC tenant, an OCI Object Storage bucket created in the compartment, and an IDCS account. Make sure that you have admin access to all the accounts.

Before starting, let's collect all the needed details from OCI, OMC, IDCS, the WAF policy, and OCI Object Storage. You may find it useful to put the collected details, along with their step numbers, into a text editor for easy reference in later steps. All the resources needed to complete this integration are available for download.

2. COLLECT INFORMATION

2.1. From OCI
2.1.1. Go to Administration > Tenancy Details and copy the tenancy OCID.
2.1.2. In Administration > Tenancy, note the Region Name. Go to Administration > Region Management and save the corresponding Region Identifier.
2.1.3. In Administration > Identity > Users, create a new user for the integration between the WAF logs and the OCI bucket. Add the API public key and copy the fingerprint.
2.1.4. Copy the user OCID.
2.1.5. Click on Customer Secret Keys and generate a new secret key. Save the secret key in a safe place.
2.1.6. Copy the Access Key.
2.1.7. In Administration > Identity > Groups, create a new group and add the previously created user to it. Copy the group OCID.
2.1.8. Copy the group name.

2.2. From OMC
2.2.1. Go to Administration > Agents, click on the navigation button on the right, then select Download Agents. Select Gateway as the Agent Type, then copy TENANT_NAME from the bottom of the page.
2.2.2. Copy the OMC URL (ending with oraclecloud.com) from the browser URL bar.

2.3. From IDCS
From OCI, go to Identity > Federation and click on OracleIdentityCloudService. Copy the IDCS Console URL. If you are not federating IDCS identity for OCI, you can obtain your IDCS Console URL when you log out of OMC. It should have the format https://idcs-<guid>.identity.oraclecloud.com.

2.4. From the WAF policy
2.4.1. Go to Security > WAF Policies, then click on the policy already created. Copy the policy OCID.
2.4.2. Copy the CNAME target.
2.4.3. Copy the domain name of the target application.

2.5. From Object Storage
2.5.1. Go to Object Storage and click on the bucket you already created. Copy the OCID.
2.5.2. Copy the bucket name.
2.5.3. Copy the namespace.
2.5.4. Click on the compartment and copy the compartment OCID.
2.5.5. Copy the compartment name.

3. FORWARD WAF LOGS TO THE OCI BUCKET

To forward WAF logs to the OCI bucket you created previously, you need to create an SR with Oracle Support.

3.1. Set IAM Policy
The user created in step 2.1.3 must have write permission on the bucket.
To do so, we grant privileges to the group that contains this user.
3.1.1. Go to Administration > Identity > Policies and create the following policy statement:

allow group <group_name_step_2.1.8> to manage object-family in compartment <compartment_name_step_2.5.5>

3.1.2. Copy the policy OCID.

3.2. Raise an SR with Oracle Support
Once the policy is set, raise an SR on your Oracle WAF support portal and provide the following information:
•    Domain name of the application (step 2.4.3), and additional domain names if applicable
•    Access Key (step 2.1.6)
•    Secret Key (step 2.1.5)
•    WAF Policy OCID (step 2.4.1)
•    Bucket Name (step 2.5.2)
•    Bucket OCID (step 2.5.1)
•    Namespace (step 2.5.3)
•    Tenancy OCID (step 2.1.1)
•    Compartment OCID (step 2.5.4)
•    Policy OCID (step 3.1.2)
•    Region Identifier (step 2.1.2)
•    Bucket region
•    Upload Prefix: "%{+YYYY}/%{+MM}/%{+dd}/%{[log_type]}"

The implementation takes a few days. Once completed, you should see logs arriving from WAF in your OCI bucket.

4. SET UP OAUTH FOR OMC

Here, we are going to create a client application that uses a token to connect to OMC. This saves you from providing your username and password to authenticate the function. By granting the OMC Admin role to the client application, the application will be able to upload logs to OMC Log Analytics.

4.1. Obtain an OMC Access Token
Connect to IDCS, search for the OMCEXTERNAL_<your_OMC_tenant> application, and click on it. Click on Generate Access Token; a Generate Token popup should open. Select Customized Scope and Invoke IDCS APIs, then download the token. This token will be used in the next steps.

4.2. Create an application for OAuth
4.2.1. Create a JSON file named newClientApp.json with the following content. The name and displayName fields can be customized, but note that the name must end with _APPID. Replace <OMC_URL> with the one from step 2.2.2.
{
  "name": "APPOMC_SERVICEAPI_APPID",
  "displayName": "APPOMC_SERVICEAPI",
  "description": "Test client for serviceapi",
  "isAliasApp": false,
  "active": true,
  "isOAuthClient": true,
  "clientType": "confidential",
  "allowedGrants": ["client_credentials"],
  "allowedScopes": [
    { "fqs": "https://<OMC_URL>/serviceapi/" }
  ],
  "isOAuthResource": true,
  "accessTokenExpiry": 86400,
  "audience": "https://<OMC_URL>",
  "scopes": [
    { "value": "/serviceapi/" }
  ],
  "basedOnTemplate": { "value": "OPCAppTemplateId" },
  "serviceTypeVersion": "1.0",
  "serviceTypeURN": "OMCEXTERNAL",
  "schemas": [
    "urn:ietf:params:scim:schemas:oracle:idcs:App"
  ]
}

4.2.2. Run the following command to create the application:

curl -X POST https://<IDCS_DOMAIN>/admin/v1/Apps -H 'Content-Type: application/json' -H "Authorization: Bearer <OAuth_Access_Token>" -d "@newClientApp.json"

<OAuth_Access_Token> is the token value you saved to a file in step 4.1, which has the format:

{"app_access_token":"<OAuth Access Token>"}

Replace <IDCS_DOMAIN> with the domain obtained in step 2.3; it should look like https://idcs-xxxxxxxxxxxxxxxxxxx.identity.oraclecloud.com. From the response, save the <client secret>, the <id>, and the <name>; we will use them later.

4.3. Grant the OMC Admin Role to the Client App
In IDCS, click on Applications, then on your OMCEXTERNAL_<your_OMC_instance> instance. Click on Application Roles and assign the previously created application to OMC Administrator.

5. SET UP THE FUNCTION ENVIRONMENT

5.1. Prerequisites
The group and user we are going to use can be the same as the ones created in step 2.1.3.
5.1.1. Create a VCN and a subnet in your compartment. The VCN must allow egress via a NAT Gateway, an Internet Gateway, or a Service Gateway.
5.1.2. Create a policy in the root compartment with the following statements:

Allow service FaaS to use virtual-network-family in tenancy
Allow service FaaS to read repos in tenancy

5.1.3.
Because the user created in step 2.1.3 is not a tenancy administrator, add the following statements:

Allow group <group-name> to manage repos in tenancy
Allow group <group-name> to read metrics in tenancy
Allow group <group-name> to read objectstorage-namespaces in tenancy
Allow group <group-name> to use virtual-network-family in tenancy
Allow group <group-name> to manage functions-family in tenancy
Allow group <group-name> to use cloud-shell in tenancy

Replace <group-name> with the group name from step 2.1.8.
Note: If necessary, you can restrict these policy statements by compartment.

5.2. Create a function
5.2.1. On the OCI console, go to Developer Services and click on Functions.
5.2.2. Select your compartment, then click on "Create Application"; let's name it load-waf-logs-app. Select the VCN and subnet previously created in the prerequisites, then click Save.
5.2.3. Once the application is created, click on it and follow the Getting Started steps on the left side of the page using the Cloud Shell setup:
1.    Launch Cloud Shell
2.    Set up the fn CLI in Cloud Shell
3.    Update the context with the function's compartment
4.    Update the context with the location of the registry you want to use
5.    Generate an auth token (already generated when obtaining the OAuth OMC token)
6.    Log in to the registry. Note: use the user created previously and the token.
7.    Verify your setup

Stop the Getting Started steps here and continue with the following instructions:
5.2.4. Create a loadlogs Python function by entering:

fn init --runtime python loadlogs

A directory called loadlogs is created with three files: func.py, func.yaml, and requirements.txt.
5.2.5. Edit the file requirements.txt to contain these three lines:

fdk
requests
oci

5.2.6. Edit the file func.py and replace its content with the ODU Python code available in the resources.
5.2.7. Deploy the function by running:

fn -v deploy --app load-waf-logs-app

5.3.
Create function parameters

Click on the application load-waf-logs-app, click on Configuration in the left menu, then add the following parameters:
•    apiuser: the application name chosen in step 4.2, ending in _APPID
•    apipwd: the application client secret saved at the end of step 4.2
•    idcsurl: the IDCS domain URL used at the end of step 4.2
•    input_bucket: the bucket name from step 2.5.2
•    logSourceName: must be WAF_LOGS, since that is the logSourceName used in the exported resources of this blog
•    omcurl: the OMC URL from the browser URL bar in step 2.2.2
•    uploadName: use it to help isolate initial test uploads within Log Analytics, but leave the parameter with an empty value or remove it when going to production

Note: the idcsurl and omcurl values should NOT include a trailing "/".

5.4. Create a dynamic group for the function
In order to use other OCI services, the function must be part of a dynamic group. From the OCI console, go to Identity > Dynamic Groups and create a new dynamic group, for example fn_oci. In the matching rules, add the following statement, using the correct compartment OCID from step 2.5.4:

ALL {resource.type = 'fnfunc', resource.compartment.id = 'ocid1.compartment.oc1..aaaaaxxxxx'}

6. SET UP THE EVENT RULE

6.1. Bucket storage configuration
6.1.1. Go to the Object Storage bucket where the WAF logs are stored and enable the Emit Object Events option.

6.2. Create an IAM policy
The dynamic group previously created needs to manage objects within your tenancy. To do so, go to Identity > Policies and add the following statement:

allow dynamic-group <name_chosen_in_step_5.4> to manage objects in tenancy

Note: If necessary, you can restrict this policy statement by compartment.

6.3. Create an event rule
From the OCI console menu, go to Application Integration, then Events Service.
Create a new rule as follows:
Event Matching: Event Types
Service Name: Object Storage
Event Type: Object – Create
Attributes: bucketName: the bucket name from step 2.5.2
Actions: call the function application created earlier.

7. CONFIGURE OMC

7.1. Import log parsers
Download the log parser exports from the resources. Each parser is a .zip file containing a content.xml. From the OMC menu, go to Log Analytics, then click on Administration Home. Click the gear icon in the top right corner, then click Import Configuration Content and select your parsers.

7.2. Import the log source
Download the log source export from the resources and select the .zip file containing the content.xml. From the OMC menu, go to Log Analytics, then click on Administration Home. Click the gear icon in the top right corner, then click Import Configuration Content and select the .zip file.

7.3. Import the dashboard
Download the dashboard JSON export from the resources, then run the following curl command, using your OMC credentials, to import the dashboard into your OMC instance:

curl -X PUT https://xxxx.omc.ocp.oraclecloud.com/serviceapi/dashboards.service/import -H 'cache-control: no-cache' -H 'Content-type: application/json' -u 'omc_username' --data @/path/to/exported/dashboard.json -o /tmp/import_output

8. RESOURCES

Python function
Log Source
Parser 1
Parser 2
WAF Dashboard

9. FINAL RESULT

The configuration is now complete. New log files arriving in the Object Storage bucket will be uploaded to OMC under the WAF_LOGS log source and start populating the OCI WAF dashboard shown below.

A global overview summarizes access requests on a map, by country, by URL, by application, by error code, and by error code over time. It also gives insight into the security rules triggered, by IP address and over time.
A second tab gives more insight into the OWASP Top 10 threats. A third tab provides details on all Threat Intelligence Feed detections, and another tab gives details about detections based on access rules. Finally, a tab details all threats detected and blocked by the JavaScript Challenge feature.

I hope you found this blog helpful. Here are a couple of next steps to help you get started:
If you don't have an Oracle Cloud Infrastructure account, try our Free Tier.
To try the dashboard, download the JSON export from the resources.
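As a sanity check on the event-to-function wiring described in sections 5 and 6, here is a minimal sketch of how the loadlogs function might pull the object details out of the event it receives. The field names follow the OCI Events payload format but are an assumption for illustration; the real func.py from the resources also fetches the object and uploads it to OMC Log Analytics.

```python
# Hedged sketch: extracting bucket/object details from an Object Storage
# "Object - Create" event. Verify the exact field names against a real
# event payload in your own tenancy.

SAMPLE_EVENT = {
    "eventType": "com.oraclecloud.objectstorage.createobject",
    "data": {
        "resourceName": "2020/06/18/waf_access.log.gz",
        "additionalDetails": {
            "namespace": "mytenancy",
            "bucketName": "waf-logs-bucket",
        },
    },
}

def extract_object_info(event):
    """Return (namespace, bucket, object_name) for a create-object event."""
    data = event["data"]
    details = data["additionalDetails"]
    return details["namespace"], details["bucketName"], data["resourceName"]

namespace, bucket, obj = extract_object_info(SAMPLE_EVENT)
print(namespace, bucket, obj)
```

With this in place, the rest of the function only needs the configuration parameters from section 5.3 to read the object and forward it to OMC.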


Application Security

How frequent static application testing finds potential vulnerabilities

For this posting, I would like to introduce my joint guest author Naveen Gupta, who is a Principal Security Engineer in the SaaS Cloud Security (SCS) organization.

Oracle has a long-standing secure development product lifecycle that is a core component of the Oracle Software Security Assurance (OSSA) program. OSSA is Oracle's methodology for building security into the design, build, testing, and maintenance of its products, whether they are used on-premises by customers or delivered through Oracle Cloud. One of the requirements of the OSSA program is to evaluate Oracle functionality throughout the product lifecycle using static analysis tools. As mentioned in this Oracle blog, the Oracle SaaS Cloud Security (SCS) organization completes security testing during the DevSecOps cycle for SaaS applications such as Oracle Fusion Cloud.

Static application security testing (SAST) is a white-box method of testing. SAST examines the source code to find software flaws and weaknesses that can lead to security risks. These risks are defined by various governing bodies and standards such as OWASP, CWE, NIST, SANS, and PCI. DevSecOps aims to embed security into every part of the development, delivery, and operations processes. Because most security vulnerabilities are introduced at coding time, it is essential to identify and fix them at the earliest possible stage of the development cycle. The SCS team completes static analysis during the coding stage, using SAST tools to analyze the application's source code and bytecode without executing the application.

Benefits of Using SAST Tools in SaaS

When it comes to SaaS, secure coding is non-negotiable. Cloud service providers simply cannot afford to implement insecure applications and software systems. In a typical SaaS environment, the software development lifecycle (SDLC) works with the traditional CI/CD (Continuous Integration/Continuous Deployment) DevOps model.
There are multiple advantages that SAST brings to the SDLC for Oracle SaaS applications. Some of these are:
(a) SAST tooling examines code in detail in the repository, reducing the time and personnel required to identify potential security defects. Automated SAST tools are fast and can examine code frequently, which is the very essence of a CI process.
(b) Conventional threat modelling cannot anticipate every possible technique that attackers can use to exploit the vulnerabilities that can exist in software. These vulnerabilities exist because even smart developers can make coding errors that cause vulnerabilities in an application. The use of SAST tools during software development therefore acts as a reliable defense against common application threats and coding errors.
(c) SAST tools are integrated into the SDLC for Oracle SaaS, reducing security risk in the application by enforcing checks at different phases. For example:
- During the development phase, application developers incorporate SAST into their development tooling (with integrated development environment (IDE) plugins) and workflow.
- At the build phase of the DevSecOps model, SAST tools are integrated into the software engineering system during CI/CD execution.
- Before deployment, security teams use SAST tools to scan applications for security vulnerabilities.
- In addition, some SAST tools can integrate with source repositories and automatically report vulnerabilities to defect tracking systems.
(d) SAST tools are executed early in the SDLC, minimizing the risk of critical or high vulnerabilities getting into a deployed SaaS application.
(e) SAST results serve as evidence artifacts for SaaS applications that must comply with industry security audits like the Federal Risk and Authorization Management Program (FedRAMP) or the Payment Card Industry Data Security Standard (PCI DSS).
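The build-phase check described above amounts to a gate: the pipeline blocks deployment when the scan report contains findings above a severity threshold. Here is an illustrative sketch of such a gate, not Oracle's actual tooling; the report format and severity names are assumptions for the example.

```python
# Toy CI gate: fail the build when a SAST report contains critical or
# high findings. A real pipeline would parse the scanner's report file.

FAIL_SEVERITIES = {"critical", "high"}

def build_may_proceed(findings):
    """Return True only if no finding is of a blocking severity."""
    blocking = [f for f in findings if f["severity"].lower() in FAIL_SEVERITIES]
    for f in blocking:
        print(f"BLOCKING {f['severity']}: {f['category']} in {f['file']}")
    return not blocking

sample_report = [
    {"severity": "High", "category": "Cross-Site Scripting", "file": "view.jspx"},
    {"severity": "Low", "category": "Code Quality", "file": "Util.java"},
]

if not build_may_proceed(sample_report):
    print("Build failed: fix blocking findings before deployment")
```

Because the gate runs on every CI execution, a critical defect is caught within the same change that introduced it, rather than in a pre-release audit.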
Types of Vulnerabilities Found with SAST

Security vulnerabilities that we identify during the phases of the DevSecOps model often fall into the following types:
- Input validation and representation
- Application Programming Interface (API) abuse
- Authentication
- Authorization
- Security features
- Errors
- Code quality
- Encapsulation
- Auditing and logging

Figure 1: SAST occurs during the Plan, Code, Build, and Test phases of the DevSecOps cycle

SAST tools work directly on the source code, using an inside-out approach to perform security testing. SCS analyzes the results from the scan reports for multiple vulnerabilities to identify security issues. We use in-memory graphs to identify untrusted data entry points (sources) and the points where vulnerabilities (sinks) manifest during code execution.

What do SAST tools find? During SAST, we analyze for these vulnerabilities:
(a) Buffer overflow vulnerabilities that involve writing or reading more data than a buffer can hold
(b) Mistakes, weaknesses, and policy violations in application deployment configuration files such as web.xml
(c) Security violations in dynamic HTML content, including Java Server Pages (jspx), JavaScript (js), and Java Server Faces (jsff) files
(d) Time-of-check to time-of-use (TOCTOU) issues that can result in potentially dangerous sequences of operations
(e) Vulnerabilities that involve tainted data (user-controlled input) put to potentially dangerous use; for example, injection or cross-site scripting (XSS)
(f) De-references of pointer variables that are assigned the null value
(g) Dangerous flaws in the structure or definition of the program; for example, violations of secure programming practices such as deprecated code functions, or objects not defined as static or final when required
(h) Dangerous uses of functions and APIs at the intra-procedural level; for example, unsafe calls that trigger buffer overflows, format string issues, and execution path issues

How SCS uses SAST Tools

The Security Testing Services (STS) team provides code-scan services using SAST tools for various Oracle SaaS applications. The team helps with analysis of the identified security issues and maintains a repository of scan artifacts on a centralized, role-based access control (RBAC) server for audit and review purposes. In addition to running SAST tools, the SCS team researches and implements industry best practices to reduce false positives. The team also trains developers on how to use SAST tools and analyze the results. Development teams that are skilled in using SAST tools can find and fix actual problems faster than teams that must spend additional time understanding the tool and scraping through false positive results.

Millions of lines of code: Automation to the rescue

The SCS team builds and deploys SAST automation as part of the Automated SaaS Cloud Security Services (ASCSS) infrastructure at Oracle. We develop this automation to integrate with existing SaaS applications and upcoming SaaS micro-services. In Oracle SaaS, we integrate on-demand scan processes with our central build orchestration system.
The integrated automation suite triggers scanning jobs on demand for a given SaaS application such as Oracle Fusion. In addition, for next-generation SaaS micro-services, we added automated code scanning of a service as part of the Continuous Integration process (see Figure 2).

Figure 2: Example integrated code scan report in a CI pipeline

How do we deal with false positives?

Historically, most SAST tools, by their nature, generate a lot of false positives. Independent SAST analysis of source code performed by third parties with only indirect knowledge of the specific applications has limited value; fully leveraging SAST tools requires an in-depth understanding of how the application is architected and how it operates in a production environment. Recommended practices to overcome the challenge of false positives include the following tasks:
(1) Create custom rules with validations to reduce false positives, and apply them at scan time
(2) Apply a filter file containing a list of non-issue categories at scan time
(3) Create a set of visibility filters to hide false positives from the audit view

While custom rules and scan-time filters remove the issue completely from the scan result file, visibility filters only hide the issue from the audit view. SCS follows a set of industry practices for SAST tools to hide false positive issues while performing the audit for SaaS properties.

Conclusion

Application security testing is fundamental to Oracle and all SaaS applications as one of our core DevSecOps principles. It is an engineering area that invokes significant passion and results from security-minded engineers. One recommended engineering practice is to always have automated SAST testing as a component of the coding phase of a DevSecOps model. The use of SAST by the SCS team is another example of the Automated SaaS Cloud Security Services (ASCSS) infrastructure at Oracle.
We will continue to provide additional examples of the DevSecOps processes and tools that we use in Oracle SaaS Cloud Security in future blog posts. We welcome your feedback and questions, and we will continue to share content and posts based on your requests.
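The source-and-sink analysis described earlier (untrusted entry points whose data reaches a dangerous operation) can be illustrated with a toy example. This is a deliberately simplified sketch of the idea, not how a production SAST engine works: real tools operate over in-memory program graphs, and all names below are invented for illustration.

```python
# Toy taint analysis: a "program" is an ordered list of (op, target, args)
# tuples. Taint enters at sources, propagates through assignments, is
# cleared by sanitizers, and is flagged when it reaches a sink.

def analyze(instructions):
    """Flag any tainted value that reaches a sink without sanitization."""
    tainted = set()
    findings = []
    for op, target, args in instructions:
        if op == "source":        # untrusted entry point, e.g. HTTP param
            tainted.add(target)
        elif op == "assign":      # taint propagates through assignment
            if any(a in tainted for a in args):
                tainted.add(target)
            else:
                tainted.discard(target)
        elif op == "sanitize":    # e.g. escaping/validation clears taint
            tainted.discard(args[0])
        elif op == "sink":        # dangerous operation, e.g. SQL execution
            findings.extend(
                f"tainted '{a}' reaches sink '{target}'"
                for a in args if a in tainted
            )
    return findings

program = [
    ("source", "user_input", ()),
    ("assign", "query", ("user_input",)),
    ("sink", "sql_exec", ("query",)),
]
print(analyze(program))  # flags 'query' reaching 'sql_exec'
```

Inserting a ("sanitize", "user_input", ("user_input",)) step before the assignment clears the taint, and the same program produces no findings, which is exactly why custom rules modeling an application's own sanitizers are so effective at reducing false positives.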


Database Security

How much Database Security is Enough? Know where to start

We often talk about the Maximum Security Architecture (MSA), but the reality is that not every database needs that level of protection. I thought it might be worth spending some time on what a baseline security posture for the Oracle Database should include – what the Minimal Security Architecture should be. Once we know the maximum and the minimum, we can think of database security on a sliding scale, with your database's security controls adjusted to reflect the value of the data contained within the database and your organization's willingness to accept risk to that data.

We like to see these seven simple things done for ANY Oracle database, including Oracle Standard Edition, without any additional-cost licenses:
1. Adjust your configuration to remove unnecessary risk
2. Apply security patches in a timely manner
3. Practice good password discipline
4. Reduce account privileges wherever possible
5. Know your data
6. Audit security-relevant activity
7. Encrypt database network traffic

These seven baseline security practices form the foundation for follow-on security controls that increase the security posture (and decrease risk) all the way up to the Maximum Security Architecture. Without them, adding additional technical controls may improve security, but it will not result in a truly secure system.

Adjust your configuration to remove unnecessary risk. There are hundreds of database parameters, and many of them affect the security posture of the system. Oracle provides the Database Security Assessment Tool (DBSAT) to help you evaluate your configuration and identify settings that may introduce additional risk. DBSAT is simple to download and run, usually producing usable reports within minutes. If you are running databases in the Oracle Cloud, you can also use Oracle Data Safe (included with your Database as a Service subscription) to perform the same types of checks DBSAT does for on-premises databases.

Apply security patches in a timely manner.
Oracle releases security patches quarterly. With each release, we also provide guidance on the type of vulnerabilities being mitigated in the patch, the attack vector and complexity, and the severity. The reality is that once we release a patch, it isn't long before malicious actors begin reverse engineering it to learn more about the vulnerabilities and how they can be exploited. In some cases, the gap between our release of a patch and the availability of automated exploits can be as little as a few days. As every experienced IT professional knows, patching carries its own operational risk, and it's always a balancing act between testing patches and applying them quickly. The important thing is to evaluate each patch and make a decision on your timeline for applying it. The decades-old DBA mantra of "if it ain't broke, don't fix it" doesn't match up with modern risk evaluation! If you are not already subscribed to receive notifications of new critical patches, you can do so here.

Practice good password discipline. This sounds so basic that you may think it doesn't need to be said, but having evaluated hundreds of production databases in real customer environments, I can tell you that it IS something you should be paying attention to. The temptation to create accounts with passwords that don't expire, without those annoying complexity requirements, and without limits on failed logins seems to draw people in. Remember that most database breaches involve compromised account credentials, so don't neglect this most basic of security checks. DBSAT (or Oracle Data Safe if your database is in the Oracle Cloud) will help you here, letting you know which users have non-expiring passwords, passwords without complexity checks, and accounts that don't automatically lock after a certain number of failed logins.

Reduce account privileges wherever possible. Most database breaches involve compromised account credentials (sound familiar?).
That means you want to reduce the damage a compromised account can do whenever possible. This can be something as simple as reporting on the privileges and roles an account has and doing a manual review. If you are running the Enterprise Edition of Oracle Database, you can use the Privilege Analysis feature to report on the privileges an account uses, as well as the privileges an account has that are not being used. Those unused privileges are excellent candidates for elimination. It’s always good to be cautious before removing privileges from a user, so I’ll usually take a two-step approach: running privilege analysis for a few months to identify unused privileges, and then auditing the use of those privileges for several more months just to be sure the user doesn’t simply use them infrequently.

Know your data.

Many have said that “data is the new oil” – but not all data is created equal. Some data has a higher value (with attendant higher security risk) than other data. Know what types of sensitive data your database holds and, almost as important, how much of that sensitive data there is. DBSAT can help here, with its sensitive data discovery module. If you are running databases in the Oracle Cloud, you can also use Oracle Data Safe’s sensitive data discovery module. The baseline security posture we’re discussing here is appropriate for databases with very low risk, databases that don’t contain a lot of sensitive data. The more sensitive data, and the more value that data holds, the more you should be doing to protect it.

Audit security-relevant activity.

Just as important as knowing the types and quantity of sensitive data in your database is knowing how that data and your database are being accessed.
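The two-step Privilege Analysis approach described above might look like the following sketch (Enterprise Edition only; the capture name is illustrative, and the exact columns of DBA_UNUSED_PRIVS vary a little by release):

```sql
-- Step 1: capture privilege usage database-wide for a representative period.
BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'baseline_capture',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(name => 'baseline_capture');
END;
/

-- ...weeks or months later, after normal workload has run...
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE(name => 'baseline_capture');
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT(name => 'baseline_capture');
END;
/

-- Step 2: privileges that were granted but never exercised are removal candidates.
SELECT username, sys_priv, obj_priv
  FROM dba_unused_privs
 WHERE capture = 'baseline_capture';
```

As the post notes, treat the results as candidates to audit further, not as a list to revoke blindly.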
The Oracle Database has superb auditing capabilities, and we improve them with every release. You should be auditing database login events, changes to user accounts, grants of database privileges, and changes to the database schema. You may hear “I can’t enable auditing, the performance impact is too high” – but if you think about the things I’m suggesting you audit, you’ll see that these are low-frequency, high-value operations. They shouldn’t be happening often in most databases, and therefore the performance impact will be minimal. Without an audit trail, your ability to detect malicious activity is severely compromised, and your ability to support a forensic investigation is almost non-existent.

Encrypt database network traffic.

Encryption of data in motion is standard now – websites that don’t use HTTPS are the exception, not the rule. The same should be true for databases. Enabling encryption in an Oracle Database is as simple as a single line in a configuration file that enables Oracle Native Network Encryption (NNE).

These seven simple steps establish a reasonable security baseline and are the foundation you can build on as you increase your security posture towards the Maximum Security Architecture. If you’d like to learn more about Oracle Database Security, please take a look at our third edition of “Securing your Database – A Technical Primer”.
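To make the last two steps concrete, here is one possible sketch. First, a unified audit policy (12c and later; the policy name is illustrative, and you would extend the action list to cover the schema changes that matter in your environment) for the kinds of low-frequency, high-value operations discussed above:

```sql
-- Illustrative unified audit policy covering logins, account changes, and grants.
CREATE AUDIT POLICY baseline_activity
  ACTIONS LOGON, LOGOFF,
          CREATE USER, ALTER USER, DROP USER,
          GRANT;

-- Enable the policy.
AUDIT POLICY baseline_activity;
```

And the single server-side sqlnet.ora line mentioned above that turns on Native Network Encryption:

```
SQLNET.ENCRYPTION_SERVER = REQUIRED
```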
