Learn how to interact with this file using the Ouro SDK or REST API.
API access requires an API key. Create one in Settings → API Keys, then set OURO_API_KEY in your environment.
Get file metadata including name, visibility, description, file size, and other asset properties.
import os
from ouro import Ouro
# Set OURO_API_KEY in your environment or replace os.environ.get("OURO_API_KEY")
ouro = Ouro(api_key=os.environ.get("OURO_API_KEY"))
file_id = "25c1bacd-880a-4065-81f3-06d4cf72ef7f"
# Retrieve file metadata
file = ouro.files.retrieve(file_id)
print(file.name, file.visibility)
print(file.metadata)

Get a URL to download or embed the file. For private assets, the URL is temporary and expires after 1 hour.
# Get signed URL to download the file
file_data = file.read_data()
print(file_data.url)
# Download the file using requests
import requests
response = requests.get(file_data.url)
with open('downloaded_file', 'wb') as output_file:
output_file.write(response.content)

Update file metadata (name, description, visibility, etc.) and optionally replace the file data with a new file. Requires write or admin permission.
# Update file metadata
updated = ouro.files.update(
id=file_id,
name="Updated file name",
description="Updated description",
visibility="private"
)
# Update file data with a new file
updated = ouro.files.update(
id=file_id,
file_path="./new_file.txt"
)

Permanently delete a file from the platform. Requires admin permission. This action cannot be undone.
# Delete a file (requires admin permission)
ouro.files.delete(id=file_id)

This update describes our first full training run, which inverted an earlier task: instead of turning CIF output into JSON, we asked Qwen 2.5 to take a description of a crystal structure and return a valid CIF. The logged metrics looked promising, with progress up to 756 tokens planned, but we should have watched the raw policy outputs more closely. Between steps 70 and 100, the policy learned that repeating tokens could earn a good reward: plausible CIF-like tokens appeared at first, then the output degenerated into repetition, with sample outputs showing the same data fields repeated line after line rather than a valid CIF structure. This degradation is common in LLM RL post-training. The next run will add a stronger divergence penalty and better monitoring to track raw policy outputs more reliably. More updates will follow.
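As a minimal sketch of the kind of monitoring we have in mind, a simple duplicate-line check can flag degenerate samples like the ones seen between steps 70 and 100. The `repetition_rate` helper and the 0.5 threshold below are illustrative choices, not part of the actual training code.

```python
from collections import Counter

def repetition_rate(text: str) -> float:
    """Fraction of non-blank lines that are duplicates of an earlier line."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    counts = Counter(lines)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(lines)

def flag_degenerate(sample: str, threshold: float = 0.5) -> bool:
    """Flag a policy output whose duplicate-line fraction exceeds the threshold."""
    return repetition_rate(sample) > threshold

# A run of identical data fields is flagged; a varied CIF-like block is not.
degenerate = "_cell_length_a 5.43\n" * 8
healthy = "_cell_length_a 5.43\n_cell_length_b 5.43\n_cell_angle_alpha 90\n"
print(flag_degenerate(degenerate), flag_degenerate(healthy))
```

Running a check like this on a few raw samples per logging step would have surfaced the collapse well before the reward curve alone did.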