Author's Description
A 7-billion-parameter Code LLaMA - Instruct model finetuned to generate Solidity smart contracts, using the 4-bit QLoRA finetuning method provided by the PEFT library.
Key Specifications
Supported Parameters
This model supports the following parameters:
Performance Summary
AlfredPros: CodeLLaMa 7B Instruct Solidity consistently performs among the fastest models and offers highly competitive pricing, ranking near the top percentile for both speed and cost across eight benchmarks. However, its overall accuracy across various tasks is notably low. In the Hallucinations (Baseline) benchmark, it achieved 30.0% accuracy (7th percentile), indicating a tendency to hallucinate rather than acknowledge uncertainty. Similarly, it scored 32.0% accuracy on Email Classification (5th percentile) and 2.0% accuracy on Reasoning (4th percentile). The model scored 0.0% accuracy in General Knowledge, Ethics, Mathematics, Instruction Following, and Coding, suggesting significant limitations in these domains. While its speed and cost efficiency are exceptional, the model's primary weakness is its very low accuracy across almost all tested categories, making it unsuitable for tasks requiring high precision or factual correctness. Its strength lies purely in operational efficiency rather than capability.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.8 |
| Completion | $1.2 |
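As a quick sanity check on the rates above, the cost of a single request follows directly from the per-million-token prices. A minimal sketch; the token counts in the example are illustrative:

```python
PROMPT_PRICE = 0.8      # USD per 1M prompt tokens (from the pricing table)
COMPLETION_PRICE = 1.2  # USD per 1M completion tokens (from the pricing table)

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-1M-token rates."""
    return (prompt_tokens * PROMPT_PRICE
            + completion_tokens * COMPLETION_PRICE) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
cost = request_cost(2_000, 500)
print(f"${cost:.6f}")  # → $0.002200
```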
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Featherless | alfredpros/codellama-7b-instruct-solidity | 4K | $0.8 / 1M tokens | $1.2 / 1M tokens |
| Parasail | alfredpros/codellama-7b-instruct-solidity | 8K | $0.8 / 1M tokens | $1.2 / 1M tokens |
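Endpoints like these are typically reached through an OpenAI-compatible chat completions API (for example via an aggregator such as OpenRouter). A minimal sketch of assembling such a request body; the `max_tokens` choice and the example message are assumptions, not taken from the tables, and only the payload is built here (no network call):

```python
import json

MODEL = "alfredpros/codellama-7b-instruct-solidity"  # endpoint name from the table
CONTEXT_LIMIT = 4_096  # smallest listed context window (Featherless, 4K)

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user",
         "content": "Write a minimal ERC-20 token contract in Solidity."},
    ],
    # Illustrative cap: prompt plus completion must fit within the
    # provider's context window.
    "max_tokens": 1_024,
}

# Serialized body for a POST to an OpenAI-compatible /chat/completions endpoint:
body = json.dumps(payload)
print(body[:60], "...")
```

With the 4K Featherless endpoint, longer Solidity contracts may be truncated; the 8K Parasail endpoint leaves more room for both prompt and completion.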
Benchmark Results
| Benchmark | Category | Reasoning | Strategy | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|---|