Author's Description
A 7 billion parameter Code LLaMA - Instruct model finetuned to generate Solidity smart contracts, using 4-bit QLoRA finetuning provided by the PEFT library.
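The description above implies the released weights can be loaded in 4-bit precision with bitsandbytes, the same quantization setup QLoRA finetuning uses. The sketch below shows one way to do that with the Hugging Face `transformers` library; the repository id and the prompt are assumptions, not taken from this page.

```python
# Minimal sketch, assuming the model is published on the Hugging Face Hub
# under the id below (an assumption; substitute the actual repository id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "AlfredPros/CodeLlama-7b-Instruct-Solidity"  # assumed repo id

# 4-bit NF4 quantization, mirroring the QLoRA finetuning setup described above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Write a Solidity function that safely transfers ERC-20 tokens."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```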
Key Specifications
Supported Parameters
This model supports the following parameters:
Performance Summary
AlfredPros: CodeLLaMa 7B Instruct Solidity, a finetuned 7 billion parameter Code LLaMA model for Solidity smart contract generation, is consistently among the fastest and most competitively priced models across all benchmarks. However, it shows significant accuracy limitations in most benchmark categories: it scored 0.0% in Coding, Instruction Following, Ethics, and General Knowledge, indicating a fundamental inability to perform these tasks correctly. It showed some capability in Email Classification (32.0% accuracy) and Reasoning (40.0% accuracy), but these scores are still relatively low, placing it in the 6th and 26th percentiles respectively. The model's primary strength is operational efficiency (speed and cost) rather than accuracy or breadth of understanding. As a Solidity-focused model, weak results on general benchmarks are to be expected, but the complete lack of accuracy on core coding and instruction-following tasks is a notable weakness.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.8 |
| Completion | $1.2 |
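As a quick illustration of the rates above, the sketch below estimates the cost of a single request; the token counts are hypothetical and only serve the arithmetic.

```python
# Illustrative cost estimate at the listed rates: $0.8 per 1M prompt tokens,
# $1.2 per 1M completion tokens. Token counts are made up for the example.
PROMPT_PRICE_PER_M = 0.8
COMPLETION_PRICE_PER_M = 1.2

prompt_tokens = 1_500       # hypothetical request size
completion_tokens = 800     # hypothetical response size

cost = (prompt_tokens / 1_000_000) * PROMPT_PRICE_PER_M \
     + (completion_tokens / 1_000_000) * COMPLETION_PRICE_PER_M
print(f"Estimated cost: ${cost:.6f}")  # -> Estimated cost: $0.002160
```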
Price History
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Featherless | alfredpros/codellama-7b-instruct-solidity | 4K | $0.8 / 1M tokens | $1.2 / 1M tokens |
| Parasail | alfredpros/codellama-7b-instruct-solidity | 8K | $0.7 / 1M tokens | $1.1 / 1M tokens |
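The endpoint name in the table is the model slug used when routing requests through the hosting gateway. The sketch below assumes an OpenAI-compatible API in the style of OpenRouter; the base URL and environment variable name are assumptions to adjust for your provider.

```python
# Hedged sketch: calling the model through an assumed OpenAI-compatible gateway
# using the slug from the endpoint table above.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",       # assumed gateway URL
    api_key=os.environ["OPENROUTER_API_KEY"],      # assumed env var name
)

response = client.chat.completions.create(
    model="alfredpros/codellama-7b-instruct-solidity",  # slug from the table
    messages=[
        {"role": "user", "content": "Write a minimal ERC-20 token contract in Solidity."}
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```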
Benchmark Results
| Benchmark | Category | Reasoning | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|