Learn how to interact with this file using the Ouro SDK or REST API.
API access requires an API key. Create one in Settings → API Keys, then set OURO_API_KEY in your environment.
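For example, in a POSIX shell the key can be exported so the SDK examples below can read it (the placeholder value is illustrative):

```shell
# Export the API key created in Settings → API Keys so the SDK can read it.
# Replace the placeholder with your real key.
export OURO_API_KEY="your-api-key-here"
```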
Get file metadata including name, visibility, description, file size, and other asset properties.
import os
from ouro import Ouro
# Set OURO_API_KEY in your environment or replace os.environ.get("OURO_API_KEY")
ouro = Ouro(api_key=os.environ.get("OURO_API_KEY"))
file_id = "1dc7e010-167e-429e-94a5-9792965f9fab"
# Retrieve file metadata
file = ouro.files.retrieve(file_id)
print(file.name, file.visibility)
print(file.metadata)

Get a URL to download or embed the file. For private assets, the URL is temporary and will expire after 1 hour.
# Get signed URL to download the file
file_data = file.read_data()
print(file_data.url)
# Download the file using requests
import requests
response = requests.get(file_data.url)
with open('downloaded_file', 'wb') as output_file:
    output_file.write(response.content)

Update file metadata (name, description, visibility, etc.) and optionally replace the file data with a new file. Requires write or admin permission.
# Update file metadata
updated = ouro.files.update(
    id=file_id,
    name="Updated file name",
    description="Updated description",
    visibility="private",
)
# Update file data with a new file
updated = ouro.files.update(
    id=file_id,
    file_path="./new_file.txt",
)

Permanently delete a file from the platform. Requires admin permission. This action cannot be undone.
# Delete a file (requires admin permission)
ouro.files.delete(id=file_id)

This project explores a simple idea: train a model to convert crystallography data from CIF to JSON, then judge how well that JSON could reconstruct the original CIF. The policy model, a 3B language model with LoRA adapters, performs the forward conversion (CIF → JSON). A separate, frozen judge model evaluates how likely the exact CIF is to be recovered from that JSON by computing a reverse-probability score token by token, without actually generating the CIF. This score provides the reward signal for training the policy.

The setup has three parts: the policy (the converter), the judge (which scores round-trips), and a reference model for regularization. Training runs on Modal with three GPUs, using vLLM to serve the judge and a careful memory plan. The goal is a reliable, reversible representation, with a planned extension to natural-language descriptions that generate CIF files.
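The reverse-probability reward described above can be sketched in a few lines. This is an illustrative reconstruction, not the project's actual code: the function names, the length normalization, and the KL-style penalty coefficient are all assumptions. In the real setup the per-token log-probs would come from the frozen judge (served via vLLM) scoring the original CIF tokens conditioned on the policy's JSON.

```python
import math

def reverse_prob_reward(judge_logprobs):
    # Hypothetical sketch: length-normalized reverse probability.
    # judge_logprobs are the judge's per-token log-probs of the ORIGINAL
    # CIF tokens, conditioned on the policy-generated JSON (teacher forcing,
    # no generation). The mean log-prob is exponentiated into a (0, 1] score
    # comparable across CIFs of different lengths.
    return math.exp(sum(judge_logprobs) / len(judge_logprobs))

def regularized_reward(judge_logprobs, policy_logprobs, ref_logprobs, kl_coef=0.1):
    # Assumed regularization: subtract a per-token KL-style penalty between
    # the policy and the fixed reference model, as in common RLHF-style setups.
    kl = sum(p - r for p, r in zip(policy_logprobs, ref_logprobs)) / len(policy_logprobs)
    return reverse_prob_reward(judge_logprobs) - kl_coef * kl

# If the judge assigns ~0.9 probability to every original CIF token,
# the round-trip reward is ~0.9.
reward = reverse_prob_reward([math.log(0.9)] * 50)
```

Because the judge scores with teacher forcing, it never samples a CIF; it only reads off the probability of each original token, which keeps the reward cheap and deterministic.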