Table of Contents

    Access policies for Oracle Cloud Infrastructure

    Using Oracle Cloud Infrastructure isn’t as simple as clicking buttons. Each user - administrators included - must belong to a group that’s been explicitly granted security access by a tenancy administrator. This is your golden ticket, whether you’re navigating the Console or working with the REST API through an SDK, CLI, or any other tool. If you suddenly find yourself locked out with a "no permission" or "unauthorized" message, the best move is to check in with your tenancy administrator: find out exactly what level of access you have and which compartments you’re allowed to work in.

    If you’re just getting started with IAM policies, don’t worry. There’s a helpful guide on Managing Identity Domains and Common Policies that’s worth a read. It’s a great way to get your footing and avoid being the person stuck asking “wait, what now?” every five minutes.

    Fine-tuning object storage permissions

    When it comes to object storage, permissions can feel like a complex dance. The policy named Let Object Storage admins manage buckets and objects hands over the keys to a specified group, allowing them to do everything with the buckets and all their associated objects. In short, no bucket gets created without someone in this elite group. It’s like the club bouncer for your buckets.

    If you’re one of those Object Storage admins and you’re feeling a little controlling (we get it), you can tweak the policy to make bucket access more restrictive and fit your needs perfectly. For all the nitty-gritty details on how to lock down Object Storage, Archive Storage, or Data Transfer, there’s dedicated documentation that has your back.

    How to Manage Permissions for S3 Inventory, Analytics, and Reports

    Granting Permissions for S3 Inventory and Analytics

    S3 Inventory dishes out detailed lists of all objects in a bucket. Meanwhile, S3 Analytics Storage Class Analysis studies your access patterns and exports the results as files designed to help you optimize your storage. The bucket holding the objects is known as the source bucket. The bucket where the resulting inventory or analytics files land is the destination bucket. Setting up these reports means crafting a bucket policy specifically for the destination bucket - it’s like giving Amazon S3 the green light to deliver your data right where you want it. Rabata helps you simplify this setup so you don’t need a PhD in bucket policies.

    Imagine you want S3 Inventory or Analytics to populate a bucket with reports. You’ll need a bucket policy granting Amazon S3 permission to write objects via PUT requests from the source bucket to the destination bucket. This is essential for the process to flow smoothly. Without this permission handshake, your inventory reports won’t show up, and your data insights will be left hanging.
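    As a sketch of that permission handshake, here’s roughly what the destination bucket’s policy can look like, assuming standard S3 policy semantics. The bucket names and account ID below are placeholders, not values from your tenancy:

```python
import json

# Hypothetical names - substitute your own buckets and account ID.
SOURCE_BUCKET = "amzn-s3-demo-source-bucket"
DEST_BUCKET = "amzn-s3-demo-destination-bucket"
ACCOUNT_ID = "111122223333"

# Bucket policy for the DESTINATION bucket: it lets the S3 service
# principal PUT inventory/analytics files, but only when the request
# originates from our source bucket and our own account.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "InventoryAndAnalyticsExamplePolicy",
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{DEST_BUCKET}/*",
        "Condition": {
            "ArnLike": {"aws:SourceArn": f"arn:aws:s3:::{SOURCE_BUCKET}"},
            "StringEquals": {
                "aws:SourceAccount": ACCOUNT_ID,
                # Keeps the bucket owner in control of the delivered files.
                "s3:x-amz-acl": "bucket-owner-full-control",
            },
        },
    }],
}

print(json.dumps(policy, indent=2))
```

    The source-bucket and source-account conditions keep another account from tricking S3 into writing files into your destination bucket.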

    Controlling Who Can Create S3 Inventory Report Configurations

    Think of creating an S3 Inventory report configuration as setting up a shopping list of your bucket's contents and metadata. The key permission here is s3:PutInventoryConfiguration. Whoever holds it can specify not only what’s on the list - which object metadata fields the report includes - but also where that list goes: the destination bucket where the inventory lives. Keep in mind that anyone with read access to that destination bucket can browse the entire inventory and its metadata, getting the full picture of what's stored.

    If you want to play gatekeeper and stop someone from configuring these inventory reports, it’s as simple as clipping that s3:PutInventoryConfiguration permission from their access rights. No permission, no report configuration magic.
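    You can either simply not grant that permission, or attach an explicit Deny so it’s blocked even if some other statement allows it. A minimal sketch, with a placeholder user ARN and bucket name:

```python
import json

# Explicit Deny beats any Allow: this user cannot create inventory
# configurations on the source bucket, full stop.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInventoryConfiguration",
        "Effect": "Deny",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/Ana"},  # placeholder
        "Action": "s3:PutInventoryConfiguration",
        "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket",
    }],
}
print(json.dumps(policy, indent=2))
```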

    Now, some metadata fields on these inventory reports are like the dessert on your data menu - optional but delightful. They appear by default but can be selectively allowed or blocked. Rabata’s system lets you manage these through the s3:InventoryAccessibleOptionalFields condition key. This means admins can fine-tune exactly which optional metadata fields users can include in their reports, keeping sensitive details under wraps or just slimming down the data to essentials.

    Want to give someone permission to include specific optional fields? Use the s3:InventoryAccessibleOptionalFields key in your bucket policy to specify those fields. It’s granular control without the fuss.

    For example, take Ana. She’s allowed to configure inventory reports, but only with 'Size' and 'StorageClass' optional metadata fields included. Thanks to a bucket policy using ForAllValues:StringEquals with the s3:InventoryAccessibleOptionalFields condition, Ana’s choices are neatly boxed into these two options. If she tries adding others, the policy stops her in her tracks.
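    A sketch of Ana’s boxed-in Allow, assuming standard S3 condition-key semantics (the account ID and bucket name are placeholders):

```python
import json

# ForAllValues means: EVERY optional field in Ana's request must be one
# of the listed values, so she can pick Size, StorageClass, or both -
# but nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowInventoryWithLimitedOptionalFields",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/Ana"},  # placeholder
        "Action": "s3:PutInventoryConfiguration",
        "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket",
        "Condition": {
            "ForAllValues:StringEquals": {
                "s3:InventoryAccessibleOptionalFields": ["Size", "StorageClass"]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```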

    Worried about someone sneaking in restricted optional fields? Add an explicit Deny statement targeting the source bucket to block specific metadata fields. For instance, Ana can be denied any inventory configuration that includes 'ObjectAccessControlList' or 'ObjectOwner' fields, while still allowing her freedom with other optional metadata. This kind of precise permission control is exactly where Rabata shines, keeping data securely organized without disabling user flexibility.
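    The deny side can be sketched like this, using a ForAnyValue condition so the statement fires as soon as any restricted field shows up in the request (again, the ARN and bucket name are placeholders):

```python
import json

# ForAnyValue means: the Deny triggers if ANY requested optional field
# matches the restricted list, regardless of what else Ana is allowed.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRestrictedOptionalFields",
        "Effect": "Deny",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/Ana"},  # placeholder
        "Action": "s3:PutInventoryConfiguration",
        "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket",
        "Condition": {
            "ForAnyValue:StringEquals": {
                "s3:InventoryAccessibleOptionalFields": [
                    "ObjectAccessControlList",
                    "ObjectOwner",
                ]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```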

    Private and Public Buckets

    How to Change the Visibility of Your Object Storage Bucket

    Every bucket in your object storage starts life as a private fortress - that means by default, no one outside your team can peek inside. Sometimes, though, you might want to swing the gates wide open for the world, or at least selectively. Rabata makes it effortless to switch your bucket’s visibility from private to public or back, so you’re always in control.

    If you need the nitty-gritty on public buckets and their quirks, check out our detailed guide at Rabata. And for those times when you want to share access without handing out your password - pre-authenticated requests have your back.

    Ways to Update Bucket Visibility with Rabata

    • Via our slick web Console, where you get a clear, user-friendly interface to toggle settings at your leisure.
    • Using the Command Line Interface (CLI) for those who prefer typing commands faster than clicking buttons.
    • Through the API - not yet, actually: changing visibility isn’t supported there, though plenty of other bucket magic is!

    Let’s zoom in on the Console approach, because clicking through settings is sometimes easier than wrestling with commands. First, go to your Buckets list page in Rabata’s Console. Need directions? We’ve got your back with a step-by-step bucket listing guide that feels like a friendly tour. Once you spot your target bucket, open its details page. There, look for the Visibility section and hit Edit.

    Now, here comes the fun part: decide whether to mark your bucket as Public or Private. Choosing Public lets you show off your bucket contents to the world. But wait - do you want visitors to just peek, or also list everything inside? If yes, check the box that says 'Allow users to list objects from this bucket.' That means your bucket will spill its secrets a bit more openly. When you’re happy with your choices, don’t forget to click Save Changes. Voilà, visibility updated!

    If CLI is more your jam, Rabata’s got a handy command for that. The magic phrase goes like this:

    • oci os bucket update --name bucket_name --public-access-type [NoPublicAccess | ObjectRead | ObjectReadWithoutList] [OPTIONS]

    Here’s the lowdown on those options:

    NoPublicAccess - Only authenticated users can get in. This is the safe, default setting - think of it as your bucket wearing a cloak of invisibility.
    ObjectRead - Public users can fetch and inspect individual objects, and also list the bucket’s contents.
    ObjectReadWithoutList - Public users can fetch and inspect individual objects, but they can’t list the bucket’s contents - no rummaging through your bucket like a curious squirrel.

    Picture this: you want to make a bucket public, object listing included - you’d pick ObjectRead. Here’s a snapshot of how the update command looks:

    oci os bucket update --name MyBucket --public-access-type ObjectRead

    Changing a public bucket back to private? Easy - just run the update command again but set --public-access-type to NoPublicAccess. Rabata wants you to never lose control, so you can flip your bucket’s visibility like a light switch.

    Heads up: this visibility toggle is not yet available through API calls but stay tuned - Rabata’s always improving.

    How to Manage User Access to Specific Folders in Rabata

    Imagine you want to give users access to a particular folder in your cloud storage. If both your IAM user and your S3 bucket live under the same AWS account roof, you can rely on an IAM policy to control access precisely to that folder. This way, you avoid the hassle of tweaking your bucket policy every time. Plus, you can assign this policy to an IAM role that several users can switch into, making management smooth and scalable - kind of like passing around the keys to the right rooms without changing the locks.

    Now, if your IAM user and the S3 bucket belong to different AWS accounts, things get a bit more social. You’ll need to set up cross-account access, which means both the IAM policy and the bucket policy must be aligned to play nice together. This ensures secure and seamless access across account boundaries. Rabata keeps this process straightforward with clear guidelines and robust tools so you won't feel like you're walking a tightrope with no safety net.

    Take the example of JohnDoe, who’s granted full console access to just his personal folder home/JohnDoe/. By crafting individual home folders for each user and applying precise permissions, multiple users can comfortably share a single bucket without stepping on each other’s toes. Rabata’s approach means you can organize your data neatly while keeping security tight and user access perfectly scoped.

    What does the example bucket policy allow JohnDoe to do?

    • AllowRootAndHomeListingOfCompanyBucket: JohnDoe can list objects at the root of the amzn-s3-demo-bucket bucket and inside the home folder. This also lets him search using the console with the prefix home/, so navigating his space is a breeze.
    • AllowListingOfUserFolder: JohnDoe is free to list all objects within his own directory, home/JohnDoe/, including any subfolders lurking inside.
    • AllowAllS3ActionsInUserFolder: JohnDoe can do everything Amazon S3 lets you do here - read, write, and delete objects - but only within his personal home folder, ensuring he won't accidentally wreak havoc on others' files.
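    The three statements above can be sketched as an IAM policy along these lines, assuming standard S3 policy semantics (bucket name and folder layout as in the example; adjust to your own):

```python
import json

BUCKET = "amzn-s3-demo-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Listing is allowed only at the bucket root and under home/.
            "Sid": "AllowRootAndHomeListingOfCompanyBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}"],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": ["", "home/"],
                    "s3:delimiter": ["/"],
                }
            },
        },
        {
            # Everything under JohnDoe's own home folder is listable.
            "Sid": "AllowListingOfUserFolder",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}"],
            "Condition": {"StringLike": {"s3:prefix": ["home/JohnDoe/*"]}},
        },
        {
            # Full object-level powers, confined to his personal folder.
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/home/JohnDoe/*"],
        },
    ],
}
print(json.dumps(policy, indent=2))
```

    Note the split between bucket-level resources (for ListBucket, scoped by prefix conditions) and object-level resources (for everything else): that separation is what keeps JohnDoe sandboxed.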

    To sum it up, the AllowRootAndHomeListingOfCompanyBucket permission equips JohnDoe with the ability to peek and poke at the bucket’s top-level content and the entire home directory - because finding your stuff quickly is half the battle won.

    The AllowListingOfUserFolder lets JohnDoe dive deep into his own exclusive corner of the bucket, listing all his files and any nested folders. It’s like having your own private gallery within the shared warehouse.

    Finally, AllowAllS3ActionsInUserFolder gives JohnDoe the power to manage his files fully from creation to deletion, but he’s respectfully confined to his personal space - it’s secure and sandboxed, just the way Rabata likes it.

    Controlling Access Based on HTTP and HTTPS Requests

    How to Allow Only HTTPS Requests

    If you want to stop sneaky hackers from messing with your network traffic, it's best to enforce HTTPS connections. Using HTTPS ensures all data is encrypted, making it much harder for attackers to spy or tamper with your information. Rabata's secure cloud storage lets you restrict access so that only encrypted HTTPS requests can reach your bucket, while plain HTTP requests get shown the door.

    To figure out if a request uses HTTP or HTTPS, Rabata relies on the aws:SecureTransport condition key in your S3 bucket policy. This key checks the protocol of incoming requests. When aws:SecureTransport returns true, it means the request comes in via HTTPS. If it’s false, that request was made over HTTP.

    With this check, you can write policies that allow only HTTPS requests and deny all HTTP ones. The result? Your data stays wrapped in encryption, and unauthorized HTTP traffic is left in the cold.
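    Here’s a minimal sketch of such a policy, assuming standard S3 semantics; the bucket name is a placeholder. A Deny on aws:SecureTransport = false rejects plain-HTTP requests no matter what other statements allow:

```python
import json

BUCKET = "amzn-s3-demo-bucket"  # placeholder name

# Deny every S3 action when the request did not arrive over TLS.
# Listing both the bucket ARN and bucket/* covers bucket-level and
# object-level operations alike.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RestrictToTLSRequestsOnly",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
print(json.dumps(policy, indent=2))
```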

    Restricting Access by HTTP Referer

    Imagine you run a website - say www.example.com or just example.com - where your photos and videos live in a Rabata bucket called amzn-s3-demo-bucket. Since Amazon S3 resources start off as private, nobody outside your AWS account can peek in by default.

    Now, if you want your site visitors to access those files, you need a bucket policy that grants read permission, but only if the request comes from certain webpages. Rabata makes this easy by letting you add a condition based on the HTTP referer header.

    This means your bucket will check the referer string against allowed URLs, like your website’s domain. If the request’s referer matches, Rabata opens the door. If not, the request is politely refused.
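    A sketch of that referer check, using the domains from the example (swap in your own site; the bucket name matches the example above):

```python
import json

# Allow public reads only when the request's Referer header matches one
# of the site's pages. StringLike lets the trailing * match any path.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetFromExampleSite",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
        "Condition": {
            "StringLike": {
                "aws:Referer": [
                    "https://www.example.com/*",
                    "https://example.com/*",
                ]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```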

    Just a caution for the tech-savvy: your users’ browsers must send the HTTP referer header along with their requests, or even legit visitors might be locked out. And since the referer header is trivial to spoof, treat this as a convenience filter against casual hotlinking, not as strong security.

    Managing Access for Application Load Balancer Logs

    How to Grant Access for Application Load Balancer Logs

    When you turn on access logs for the Application Load Balancer, you need to tell it exactly where to drop those logs. That means specifying the name of the S3 bucket where all the juicy traffic data will be stored. But don’t stop there. This bucket needs a special permission slip - a bucket policy - that lets Elastic Load Balancing (ELB) write those logs without fuss.

    Think of this bucket policy as the VIP pass for ELB. Without it, the load balancer ends up knocking on a locked door, unable to deliver the access logs. With the right permissions in place, the ELB confidently saves your logs, making your life easier when it’s time to dig through traffic patterns or troubleshoot issues.

    Here’s a classic example of what that bucket policy looks like, granting ELB the power to write access logs to your secure S3 storage.
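    A sketch of that policy, assuming the classic region-account-based grant. The ELB account ID is region-specific (look up the one for your region - 127311923021 shown here is commonly cited for us-east-1), and the bucket, prefix, and your account ID are placeholders. Note that ELB writes logs under a prefix/AWSLogs/your-account-id/ path, so the resource ARN must include that layout:

```python
import json

# Placeholders - substitute your own values.
ELB_ACCOUNT_ID = "127311923021"   # region-specific; us-east-1 shown as an example
YOUR_ACCOUNT_ID = "111122223333"
BUCKET = "amzn-s3-demo-logging-bucket"
PREFIX = "my-app"

# Allow the regional ELB account to drop access-log objects into the
# bucket, but only under the AWSLogs path for our own account.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowELBLogDelivery",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ELB_ACCOUNT_ID}:root"},
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/{PREFIX}/AWSLogs/{YOUR_ACCOUNT_ID}/*",
    }],
}
print(json.dumps(policy, indent=2))
```

    With this attached, the load balancer’s log delivery stops knocking on a locked door and starts filing your traffic history right where you asked.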