Similar Items:
- Adapting Large Language Models to a Low-Resource Agglutinative Language: A Comparative Study of LoRA and QLoRA for Bashkir
- MatryoshkaLoRA: Learning Accurate Hierarchical Low-Rank Representations for LLM Fine-Tuning
- FAST-LoRa: An Efficient Simulation Framework for Evaluating LoRaWAN Networks and Transmission Parameter Strategies
- LLM QLoRA Fine-Tuning of Llama, DeepSeek, and Qwen: A Skyrim Case Study
- Scalable Parallel Transmission in LoRa Networks: A Regulatory-Aware Analysis
- Parameter-Efficient Few-Shot Sentiment Analysis Using LoRA-Enhanced Transformers
- Localization in Multi-Story Environments Using a Multi-Loss Propagation Model in LoRa Networks With Limited Site Data