Author's Description
An attempt to recreate Claude-style verbosity, but don't expect the same level of coherence or memory. Meant for use in roleplay/narrative situations.
Key Specifications
Supported Parameters
This model supports the following parameters:
Features
This model supports the following features:
Performance Summary
Mancer: Weaver (alpha) performs well on operational efficiency: it consistently ranks among the fastest models available and offers highly competitive pricing across all benchmarks, making it a cheap and fast option to deploy.

However, the model shows significant limitations in its cognitive and analytical capabilities. Accuracy is extremely low across every benchmark category, ranging from 0.0% in Ethics, Mathematics, and Email Classification to just 4.0% in Instruction Following and Reasoning; General Knowledge is also very poor at 0.5%. These results indicate that Mancer: Weaver (alpha) struggles with fundamental tasks requiring factual recall, logical deduction, ethical judgment, and adherence to complex instructions.

While its speed and cost efficiency are notable strengths, these accuracy levels make the model unsuitable for applications that require reliable or coherent output, particularly in domains demanding precision or deep understanding. Its intended use in roleplay/narrative situations aligns with the author's stated caveats about coherence and memory.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $1.13 |
| Completion | $1.13 |
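Since prompt and completion tokens are both billed at $1.13 per million, the cost of a request is simply the total token count scaled by that rate. The helper below is an illustrative sketch (the function name and defaults are mine, not part of any official SDK), using the prices from the table above:

```python
def request_cost(prompt_tokens: int, completion_tokens: int,
                 prompt_price: float = 1.13,
                 completion_price: float = 1.13) -> float:
    """Estimate USD cost of one request, given per-1M-token prices."""
    return (prompt_tokens * prompt_price
            + completion_tokens * completion_price) / 1_000_000

# A request that fills most of the 8K context:
# 6,000 prompt tokens + 2,000 completion tokens ≈ $0.00904
print(round(request_cost(6_000, 2_000), 5))
```

At these rates, even a request using the full 8K context costs under a cent.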
Price History
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Mancer 2 | mancer/weaver | 8K | $1.13 / 1M tokens | $1.13 / 1M tokens |
Benchmark Results
| Benchmark | Category | Reasoning | Strategy | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|---|