python-sdk
Python SDK for inference.sh - run AI apps, build agents, and integrate with 150+ models. Package: inferencesh (pip install inferencesh). Supports sync/async, streaming, file uploads. Build agents with template or ad-hoc patterns, tool builder API, skills, and human approval. Use for: Python integration, AI apps, agent development, RAG pipelines, automation. Triggers: python sdk, inferencesh, pip install, python api, python client, async inference, python agent, tool builder python, programmatic ai, python integration, sdk python
Why use this skill?
Integrate 150+ AI models with the inference.sh Python SDK. Build agents, automate RAG pipelines, and handle file uploads with ease.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/okaris/python-sdk
What This Skill Does
The Python SDK skill lets you integrate the inference.sh platform directly into your Python applications. As a programmatic interface, it can run a library of 150+ AI models, create stateful agents, and build RAG (Retrieval-Augmented Generation) pipelines. The SDK supports both synchronous and asynchronous execution, streaming responses, and automatic file handling, making it a robust choice for developers scaling AI capabilities within their software stack.
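The sync/async support described above can be illustrated with a standard-library-only sketch. The `run_model` function below is a hypothetical stand-in for a blocking SDK call (the real `inferencesh` method name and return shape are not documented on this page); the thread-based fan-out pattern itself applies to any synchronous client:

```python
import asyncio

def run_model(model: str, prompt: str) -> dict:
    # Stand-in for a blocking SDK call -- the real method name and
    # return shape in `inferencesh` may differ from this simulation.
    return {"model": model, "output": f"result for: {prompt}"}

async def run_many(prompts: list[str]) -> list[dict]:
    # Fan out blocking calls via threads so the event loop stays free;
    # this works with any synchronous SDK method.
    tasks = [asyncio.to_thread(run_model, "flux-schnell", p) for p in prompts]
    return await asyncio.gather(*tasks)

results = asyncio.run(run_many(["a cybernetic cat", "a neon skyline"]))
```

Swap the body of `run_model` for the actual inference call once you have the SDK's documented method; the concurrency wrapper stays the same.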
Installation
Install the skill with: clawhub install openclaw/skills/skills/okaris/python-sdk. Then install the library via pip: pip install inferencesh. The SDK requires Python 3.8+ and a valid inference.sh API key, which should be stored in the INFERENCE_API_KEY environment variable for secure authentication at runtime.
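A minimal sketch of the environment-variable authentication described above, using only the standard library. The commented-out `Client(...)` constructor is hypothetical; consult the `inferencesh` package documentation for the actual class name and signature:

```python
import os

def get_api_key() -> str:
    """Read the inference.sh API key from the environment, failing
    fast with a clear message instead of an opaque HTTP 401 later."""
    key = os.environ.get("INFERENCE_API_KEY")
    if not key:
        raise RuntimeError(
            "INFERENCE_API_KEY is not set; export it before running, "
            "e.g. `export INFERENCE_API_KEY=...`"
        )
    return key

# Hypothetical client construction -- check the inferencesh docs for
# the real entry point:
# client = Client(api_key=get_api_key())
```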
Use Cases
- AI App Integration: Directly call hosted models like Flux or Veo from within your backend services.
- Agent Development: Build sophisticated, stateful agents using the SDK’s session management capabilities, keeping workers warm and maintaining context across multiple requests.
- Automated Workflows: Create data processing pipelines that handle file uploads, model inference, and output parsing without manual intervention.
- RAG Implementation: Facilitate complex retrieval flows by managing inputs and outputs between vector databases and the inference.sh model endpoints.
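To make the RAG use case above concrete, here is a stdlib-only sketch of the retrieval step: ranking stored documents against a query embedding by cosine similarity. In practice the embeddings would come from an embedding model hosted on inference.sh and the document store would be a real vector database; both are assumptions here, not part of the SDK:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec: list[float], docs: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    # docs: (text, embedding) pairs, e.g. fetched from your vector store.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The texts returned by `top_k` would then be stuffed into the prompt sent to a generation model via the SDK.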
Example Prompts
- "Use the inferencesh Python SDK to run the flux-schnell model and generate an image of a cybernetic cat from my prompt."
- "Show me how to initialize a stateful session with a 5-minute timeout using the inferencesh client."
- "Help me write a script that streams output logs while running a long-form video generation task on the Veo model."
Tips & Limitations
For production applications, always prefer environment variables over hardcoded API keys. When working with large files, use the upload_file utility for optimized data handling rather than manual binary stream manipulation. Note that execution depends on the availability of the target models on inference.sh. When using stateful sessions, monitor your session_timeout settings to balance performance against cost.
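One production concern the tips above imply is transient network failure when calling remote models. A small retry helper with exponential backoff, generic Python rather than part of the `inferencesh` API, can wrap any remote call:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.5):
    """Call fn(), retrying on transient ConnectionError with
    exponential backoff (base_delay, 2x base_delay, 4x, ...)."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * 2 ** i)
```

Usage would look like `with_retries(lambda: client.run(...))`, where `client.run` stands in for whatever inference call your code makes.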
Metadata
Paste this into your clawhub.json to enable this plugin.
{
  "plugins": {
    "official-okaris-python-sdk": {
      "enabled": true,
      "auto_update": true
    }
  }
}
Tags: AI
Flags: network-access, file-read, external-api
Related Skills
content-repurposing
Content atomization — turn one piece of content into many formats. Covers blog-to-thread, blog-to-carousel, podcast-to-blog, video-to-quotes, and more. Use for: content marketing, social media, multi-platform distribution, content strategy. Triggers: content repurposing, repurpose content, content atomization, content recycling, one to many content, multi platform content, cross post, adapt content, reformat content, blog to thread, blog to video, podcast to blog, content multiplication
product-changelog
Product changelog and release notes that users actually read. Covers categorization, user-facing language, visuals, and distribution. Use for: release notes, changelogs, product updates, feature announcements, versioning. Triggers: changelog, release notes, product update, version notes, what's new, feature announcement, product changelog, update log, release announcement, version release, product release, ship notes
logo-design-guide
Logo design principles and AI image generation best practices for creating logos. Covers logo types, prompting techniques, scalability rules, and iteration workflows. Use for: brand identity, startup logos, app icons, favicons, logo concepts. Triggers: logo design, create logo, brand logo, logo generation, ai logo, logo maker, icon design, brand mark, logo concept, startup logo, app icon logo
product-photography
AI product photography with studio lighting, lifestyle shots, and packshot conventions. Covers angles, backgrounds, shadow types, hero shots, and e-commerce image requirements. Use for: product photos, e-commerce images, Amazon listings, packshots, lifestyle photography. Triggers: product photography, product photo, packshot, e-commerce photography, product shot, product image, studio photography, lifestyle product, amazon product photo, product listing image, hero shot, product mockup, commercial photography
newsletter-curation
Newsletter curation with content sourcing, editorial structure, and subscriber growth strategies. Covers issue formatting, link roundups, commentary style, and sending cadence. Use for: email newsletters, link roundups, weekly digests, curated content, creator newsletters. Triggers: newsletter, email newsletter, newsletter curation, weekly digest, link roundup, curated newsletter, newsletter writing, newsletter format, subscriber growth, newsletter strategy, content curation, newsletter template