Reviewed by Aliaksei Rudak, CEO of Lingvanex
At a Glance
- Deepgram is primarily a cloud-based, API-driven speech-to-text platform often used for real-time streaming and fast developer integrations.
- Lingvanex offers both a cloud Speech-to-Text service and an on-premise/offline Speech Recognition deployment model, supporting different infrastructure and compliance needs.
- Teams typically evaluate alternatives when cost predictability, multilingual breadth, or stricter data governance requirements become critical at scale.
- Beyond raw accuracy, decision criteria usually include deployment flexibility, streaming latency and batch throughput, diarization and timestamp support, subtitle outputs, security controls, and pricing transparency.
- The choice often depends on whether the priority is rapid cloud integration or greater infrastructure control and policy alignment.
Bottom line: Deepgram fits cloud-first, streaming-focused environments, while Lingvanex fits when organizations require broader deployment control, multilingual coverage, or more predictable enterprise cost structures.

Disclaimer: This article presents an independent comparative overview of speech-to-text solutions based on publicly available information and typical use cases at the time of writing. Features, pricing, and deployment options may change, so readers should verify current details directly with each vendor.
This content is not legal or regulatory advice.
Deepgram is a popular speech-to-text platform known for its API-driven approach to real-time transcription and batch processing. It is widely used in developer-focused products such as call analytics, voice-enabled apps, and media transcription workflows, with a strong focus on English language recognition.
At the same time, many teams are beginning to seek alternatives to Deepgram as their needs grow. Common reasons include unpredictable usage-based pricing, limited multilingual support, and cloud-only deployment, which may not align with strict data privacy and compliance requirements. These issues become especially noticeable at scale or in regulated industries.
As a result, organizations are increasingly seeking transcription solutions that offer clearer cost modeling, configurable data residency and retention policies, and consistent performance across streaming and batch audio workflows, including diarization, timestamps, and subtitle generation.
Who This Comparison Is For
This comparison is designed for decision-makers and technical teams evaluating speech-to-text platforms for production use. By the end of this guide, you will have a clear decision framework to assess whether a cloud-first API model or an on-premise / hybrid deployment better aligns with your security, scalability, multilingual, and cost requirements.
This comparison is particularly relevant for:
- CTOs and Technical Executives defining long-term infrastructure strategy, deployment models, and vendor risk exposure;
- Heads of Engineering responsible for API integration, performance at scale, and system reliability;
- Security and IT Leaders evaluating data residency, encryption standards, and internal infrastructure control;
- Compliance and Risk Officers ensuring alignment with GDPR, HIPAA, SOC 2, and industry-specific regulatory requirements;
- Product Managers embedding speech recognition into SaaS platforms or customer-facing applications;
- Call Center and Operations Leaders optimizing transcription accuracy, speaker diarization, and cost per audio hour at scale;
- Media and Content Teams processing interviews, podcasts, subtitles, and multilingual recordings.
For these audiences, choosing the right Deepgram alternative is not only about transcription accuracy, but also about deployment flexibility, governance control, multilingual stability, predictable cost structures, and long-term operational resilience.
Decision in 60 Seconds
Use this quick matrix to match your primary requirement with the most suitable deployment approach:
- On-premise / offline requirement → Choose full local deployment with no external data transfer.
- Strict data residency rules → Choose on-premise or private infrastructure with location control.
- API-first SaaS integration → Choose a developer-focused cloud API with strong SDK support.
- Low streaming latency for live calls → Choose a cloud platform optimized for real-time pipelines.
- High-volume batch processing → Choose scalable infrastructure (cloud or licensed on-prem, based on data sensitivity).
- Speaker diarization & timestamps → Choose built-in multi-speaker recognition capabilities.
- Subtitle export (SRT, VTT, etc.) → Choose native subtitle generation support.
- Broad multilingual coverage → Choose platforms designed for stable cross-language performance.
- Predictable pricing at scale → Choose fixed tiers or license-based pricing models.
- Enterprise controls & audit logging → Choose solutions with role-based access and compliance features.
This 60-second filter helps narrow direction quickly: prioritize cloud-first models for speed and streaming performance, or on-premise models for governance, privacy, and cost stability.
Deployment Models Explained
When evaluating a speech-to-text platform, the deployment model fundamentally changes how audio is processed, stored, secured, updated, and monitored. The key differences are outlined below:
Key Takeaways
- Cloud deployment processes audio in vendor infrastructure and supports fast API/SDK integration.
- On-prem or offline deployment keeps streaming and batch processing inside your environment, subject to internal retention and audit policies.
- Cloud-first or cloud-only models may limit data residency control for certain regulated or policy-restricted workflows.
Cloud Deployment
In a cloud model, audio data is transmitted to the vendor’s infrastructure for processing.
- Processing: Streaming and batch transcription runs on the provider’s infrastructure via API/SDK access.
- Storage: Temporary or configurable storage in the vendor’s cloud (depending on plan).
- Access Control: Managed through API keys, role-based access control, and enterprise identity integration (e.g., SSO/SAML/OIDC, SCIM provisioning).
- Updates: Model improvements and feature updates are handled automatically by the provider.
- Monitoring: Includes usage logs, request-level telemetry, and SLA-based uptime monitoring, depending on plan configuration.
- Incident Response: Shared responsibility; infrastructure issues are handled by the provider.
Typically used for fast integration, global scalability, and minimal internal infrastructure overhead.
The right model depends on how much control, compliance alignment, and operational responsibility your organization is prepared to manage versus delegate to a vendor.
On-Premise Deployment
In an on-premise model, the speech recognition system runs inside your organization’s infrastructure (data center or private cloud).
- Processing: Performed entirely within your internal network.
- Storage: Audio and transcripts remain under your direct control.
- Access Control: Integrated with internal identity systems (e.g., SSO, RBAC).
- Updates: Controlled and scheduled by your IT team.
- Monitoring: Managed through internal monitoring and logging systems.
- Incident Response: Fully owned by your organization’s IT/security team.
Commonly used in regulated industries and in organizations requiring full infrastructure control.
Offline Deployment
Offline deployment is a specialized form of on-premise setup where the system operates without external internet connectivity.
- Processing: Runs locally on isolated machines or secure environments.
- Storage: No external data transmission; all data stays within the local device or closed network.
- Access Control: Fully governed by internal policies.
- Updates: Performed manually or via controlled update packages.
- Monitoring: Internal-only logging and monitoring.
- Incident Response: Fully internal, with no third-party dependency.
Commonly deployed in high-security environments, defense contexts, confidential R&D, and air-gapped infrastructure.
What to Test Before You Switch
Before migrating from one speech-to-text provider to another, run a controlled evaluation using 7-10 real audio samples from your production environment. Avoid synthetic demos. Use recordings that reflect actual business conditions.
Key Takeaways
- Use real audio with noise, multiple speakers, and domain vocabulary.
- Measure diarization accuracy, timestamp alignment, and API stability under load.
- Validate file limits, concurrency thresholds, and retry behavior in batch workflows.
Step 1: Build a Realistic Test Set
Select audio that includes:
- Background noise (call center noise, traffic, echo, low-quality microphones);
- Overlapping speech (two or more speakers talking simultaneously);
- Multiple speakers (at least 3-5 different voices);
- Strong accents or regional variations;
- Domain-specific vocabulary (product names, legal/medical/technical terms);
- Different audio formats (e.g., WAV, MP3, M4A, video files if applicable);
- Both short and long recordings.
This ensures the evaluation reflects operational reality rather than idealized test conditions.
Step 2: Define Clear Acceptance Criteria
Evaluate each provider against measurable criteria:
1. Transcription Quality
- Accuracy of key entities (names, numbers, product terms, compliance phrases) across streaming and batch outputs;
- Error rate in critical vocabulary, not only overall readability;
- Stability across different speakers and accents.
2. Speaker Diarization
- Correct identification and separation of speakers;
- Consistency across long conversations;
- Minimal speaker switching errors.
3. Timestamps
- Accurate alignment between transcript and audio;
- Reliable time markers for subtitle formats (SRT, VTT) and analytics indexing.
4. Punctuation & Formatting
- Logical sentence segmentation;
- Readability without heavy manual correction.
5. Processing Speed
- Real-time factor (how fast audio is processed relative to duration);
- Latency in streaming scenarios.
6. Stability & Reliability
- API consistency (no random failures);
- Stable performance under repeated batch uploads, concurrency, and retry scenarios;
- No unexpected format or size-related errors.
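The speed and vocabulary criteria above can be scored with a small script during your pilot. The helpers and the term list below are illustrative sketches, not any provider's API; substitute your own transcripts and business-critical terminology:

```python
# Sketch: scoring one provider against the acceptance criteria above.
# All names and sample values are illustrative assumptions.

def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF < 1.0 means the engine transcribes faster than real time."""
    return processing_seconds / audio_seconds

def critical_term_error_rate(transcript: str, critical_terms: list[str]) -> float:
    """Fraction of business-critical terms missing from the transcript."""
    text = transcript.lower()
    missed = [t for t in critical_terms if t.lower() not in text]
    return len(missed) / len(critical_terms)

# Example: a 300-second call transcribed in 45 seconds.
rtf = real_time_factor(45.0, 300.0)  # 0.15 → well under real time

terms = ["Lingvanex", "policy ID", "refund"]
err = critical_term_error_rate("the refund was approved for policy id 7", terms)
# "Lingvanex" is missing → 1 of 3 terms missed
```

Tracking these two numbers per provider, per test file, makes the comparison reproducible instead of impressionistic.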
Step 3: Make the Decision
Do not choose based on demo accuracy alone.
Select the provider that:
- Minimizes errors in business-critical terminology;
- Maintains diarization quality in real conversations;
- Delivers predictable performance under load;
- Meets your required speed threshold;
- Aligns with your compliance and deployment constraints.
A structured test on your own audio often reveals differences that marketing benchmarks do not show, especially in noisy, multilingual, or domain-specific environments.
Pricing and TCO at Scale
At scale, speech-to-text costs are driven less by the advertised price per minute and more by operational reality. A proper TCO model should combine usage, infrastructure, compliance, and internal overhead into one structured calculation.
You can model total cost of ownership as: TCO = Usage Cost + Infrastructure Cost + Compliance Cost + Operational Cost
Where each component depends on your deployment model.
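The formula above can be turned into a simple annual model. All rates here are placeholder assumptions for illustration; substitute actual vendor quotes and internal cost estimates:

```python
# Sketch of the TCO formula above as an annual model.
# Every rate below is a placeholder assumption, not vendor pricing.

def annual_tco(audio_hours_per_month: float,
               price_per_hour: float,             # usage cost (cloud)
               infra_per_year: float = 0.0,       # servers, storage, redundancy (on-prem)
               compliance_per_year: float = 0.0,  # retention, audit logging
               ops_per_year: float = 0.0) -> float:  # engineering, monitoring, support
    usage = audio_hours_per_month * price_per_hour * 12
    return usage + infra_per_year + compliance_per_year + ops_per_year

# Example: 10,000 hours/month at a hypothetical $0.75/hour,
# plus $20k/year compliance and $60k/year operational overhead.
cloud = annual_tco(10_000, 0.75, compliance_per_year=20_000, ops_per_year=60_000)
# usage 90,000 + compliance 20,000 + ops 60,000 = 170,000
```

Running the same function with a license fee and infrastructure cost in place of per-hour usage lets you compare cloud and on-premise scenarios on equal terms.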
Key Takeaways
- Streaming peaks and batch throughput directly affect billing or hardware sizing.
- Retention duration and audit logs increase storage and compliance costs.
- SSO/SCIM, SLA tiers, and support models add operational expense.
1. Usage Cost (Cloud or Hybrid)
Driven primarily by:
- Total audio hours per month;
- Peak real-time streaming concurrency;
- Advanced features billed separately (diarization, punctuation formatting, subtitle export, analytics);
- Language mix and model selection.
In usage-based cloud pricing, volatility typically comes from streaming peaks and unexpected volume growth rather than average monthly hours.
2. Infrastructure Cost (Primarily On-Prem)
Relevant if you deploy locally or in private cloud:
- Servers (CPU/GPU capacity sized for peak load);
- Storage for audio and transcripts;
- Redundancy and backup systems;
- Upgrade cycles over time.
On-prem typically requires higher upfront investment but may reduce marginal cost per hour at high, stable volumes.
3. Compliance & Retention Cost
Applies to both models:
- Required retention period for audio and transcripts;
- Audit logging and monitoring systems, including log storage and export for compliance review;
- Identity integration and access control;
- Regulatory documentation and reporting.
Retention policies alone can significantly affect storage and monitoring expenses.
4. Operational Cost
Often underestimated, this includes:
- Engineering integration time;
- Ongoing system monitoring;
- Admin and DevOps support;
- Vendor support tiers or SLA guarantees;
- Incident handling and downtime impact.
In enterprise environments, internal labor cost can rival infrastructure cost.
Practical Insight
Cloud pricing is often more flexible at lower or unpredictable volumes.
On-premise models tend to become financially attractive when volumes are high, steady, and compliance requirements are strict.
A realistic TCO comparison should be based on your actual workload profile, not on headline per-minute pricing.
Procurement Checklist
Before selecting or switching a speech-to-text provider, align procurement, IT, security, and product stakeholders around a structured vendor review. The questions below help uncover operational, compliance, and long-term risk factors that are not always visible in marketing materials.
Data Processing & Location
- Where exactly is audio processed (region, cloud provider, on-prem, hybrid)?
- Can data residency be restricted to a specific country or region?
- Is any audio stored after processing? If yes, for how long and where?
- Are transcripts stored separately from audio, and under what policy?
Access & Security
- Who can access customer audio and transcripts (vendor staff, subprocessors)?
- Is access role-based and logged internally?
- Are encryption standards applied in transit and at rest?
- Does the system support SSO (SAML/OIDC) and SCIM provisioning?
- Can access be restricted via IP allowlists or private networking?
Retention, Export & Deletion
- What retention options are available (configurable, zero-retention, custom policies)?
- How is permanent deletion handled, and what is the deletion timeline?
- Is bulk export of transcripts and metadata supported?
- Are audit logs exportable for compliance reviews?
Compliance & Auditability
- Which certifications are in place (e.g., SOC 2 Type I/II, ISO standards)?
- Are audit logs available for user activity and administrative actions?
- Is a Data Processing Agreement (DPA) provided?
- How are subprocessors disclosed and updated?
Reliability & SLA
- What uptime SLA is contractually guaranteed?
- Are there defined response times for critical incidents?
- Is premium or dedicated support available?
- How are outages communicated and documented?
Technical Constraints
- What are the supported file formats and size limits?
- Are there limits on streaming concurrency or batch throughput?
- Are advanced features (diarization, timestamps, subtitles) included or billed separately?
Update & Lifecycle Policy (Critical for On-Prem)
- How are model updates delivered (automatic vs manual deployment)?
- Can updates be postponed or validated in staging environments?
- What is the vendor’s long-term support policy for older versions?
- How are security patches distributed and documented?
What to Look for in a Deepgram Alternative
When evaluating a Deepgram alternative, it is important to look beyond basic speech recognition accuracy. The right solution should align with technical, security, and regulatory requirements.
Transcription Accuracy in Real-World Audio
An effective alternative should deliver consistent transcription results in high-noise environments, during speech overlap, across accents, and with domain-specific vocabulary without requiring excessive manual correction or complex retraining.
Beyond general accuracy, assess:
- Speaker diarization quality (correct speaker separation in multi-party calls);
- Timestamp precision (alignment accuracy for analytics and subtitles);
- Named entity stability (proper handling of numbers, product names, IDs, medical/legal terms);
- Consistency across long recordings (no degradation after 30-60+ minutes).
Accuracy should be measured not only by overall readability but by performance on business-critical terminology.
Real-Time and Batch Transcription Support
The platform should support both streaming transcription of real-time conversations and batch processing of recorded audio, providing flexible use in calls, meetings, and media-rich workflows.
For real-time streaming, evaluate:
- Supported concurrent streams;
- End-to-end latency under peak load;
- Stability during long sessions;
- Behavior under network fluctuations.
For batch processing, verify:
- File size and duration limits;
- Parallel upload limits;
- Throughput under large archives;
- Queue handling and retry mechanisms.
Streaming reliability and batch scalability are often where production differences emerge.
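When testing batch scalability, it helps to wrap uploads in a retry layer so you can observe how each provider behaves under transient failures. The sketch below is a generic pattern, not any vendor's SDK; `submit_batch` is a hypothetical stand-in for the provider's upload call:

```python
# Sketch: retry wrapper for batch transcription uploads.
# `submit_batch` and the exception types are illustrative assumptions.
import random
import time

def submit_with_retries(submit_batch, payload, max_attempts=5, base_delay=1.0):
    """Retry transient failures with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return submit_batch(payload)
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the error to the caller
            # Exponential backoff with jitter to avoid synchronized retries.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

Logging the attempt count and delay per file during the pilot reveals how often each provider's API actually fails under load, which rarely shows up in marketing materials.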
Language Coverage and Multilingual Support
An effective alternative must provide broad multilingual coverage with consistent performance across regions.
Beyond the number of supported languages, check:
- Stability in smaller or less common languages;
- Accent and dialect handling;
- Code-switching performance (multiple languages in one recording);
- Availability of punctuation and formatting across languages.
Multilingual maturity matters more than headline language count.
On-Premise and Private Deployment Options
For many enterprises, on-premise or private cloud deployment is critical.
Clarify:
- Whether full local processing is supported;
- Internet connectivity requirements (fully offline vs connected private cloud);
- Update control (automatic vs scheduled manual updates);
- Infrastructure sizing for peak concurrency.
Deployment flexibility directly impacts compliance and cost modeling.
Data Security and Regulatory Compliance
The solution must align with GDPR, HIPAA, and corporate security standards.
Specifically verify:
- Encryption in transit and at rest;
- Access control and identity federation (RBAC, SSO, SCIM support);
- Audit logging capabilities;
- Data retention configuration options;
- Explicit deletion and export procedures.
Security features must be operationally enforceable, not only contractually stated.
Compliance with International Standards
Many organizations require adherence to recognized security and quality frameworks (e.g., ISO-based systems, SOC reports).
Ensure:
- Auditability of system behavior;
- Documentation transparency;
- Subprocessor disclosure;
- Long-term support policies for enterprise deployments.
Compliance alignment reduces vendor risk over time.
Pricing Transparency and Cost Predictability
A viable alternative must offer pricing that scales without hidden volatility.
Assess:
- Impact of peak streaming concurrency;
- Charges for advanced features (diarization, formatting, subtitles);
- Storage or retention-related costs;
- Overage policies;
- License-based options for high, stable volumes.
At scale, predictability often matters more than headline price.
API Quality and Ease of Integration
Well-documented APIs and SDKs are essential for production integration.
In addition to documentation, verify:
- Clear rate limits and streaming concurrency limits;
- Stable versioning policy;
- Error handling transparency;
- Webhooks or callback mechanisms;
- SDK availability across your stack;
- Support for subtitle outputs (SRT, VTT, ASS, etc.).
Integration quality directly affects engineering time and long-term maintenance burden.
Quick Overview: Deepgram Strengths and Limitations
Deepgram is a widely used speech-to-text platform focused on developer-friendly APIs and scalable cloud-based transcription. It is commonly adopted in products that require real-time speech recognition and integration into existing software systems.
Strengths of Deepgram
- Fast transcription speeds with performance optimized for real-time streaming scenarios;
- API-first architecture suitable for developers who require programmatic access;
- Flexible integration options for embedding speech-to-text into applications;
- Well-suited for high-volume English-language transcription;
- Commonly used in call analytics and voice-driven applications.
Limitations of Deepgram
- Primarily cloud-oriented deployment model, with private or self-hosted options typically available in higher-tier or enterprise plans.
- Multilingual coverage may vary in depth and performance depending on language and model selection.
- Usage-based pricing structure can introduce cost variability as audio volumes or streaming concurrency increase.
- Advanced features and enterprise controls may require higher-tier plans or custom agreements.
- Cost efficiency may depend heavily on workload profile, particularly for organizations processing consistently large audio volumes.
Lingvanex vs. Deepgram: Feature-by-Feature Comparison
Lingvanex is a speech and language technology provider offering Speech-to-Text in two delivery models: a cloud API for fast transcription workflows and an on-premise/offline deployment for organizations that require strict data control, internal processing, and compliance-driven infrastructure. Lingvanex is commonly evaluated by teams that need broad multilingual coverage, predictable enterprise deployment options, and the ability to keep audio and transcripts inside their own environment.
Deepgram is an API-first speech-to-text platform widely associated with real-time streaming transcription and developer-focused integrations. It is commonly used in products like call analytics, voice-enabled applications, and media transcription pipelines, particularly where cloud-based scalability and fast time-to-integration are the priority.
We compare them because they represent two common approaches to speech recognition used by modern product teams: a cloud-first, API-driven transcription platform optimized for streaming and quick integration (Deepgram), and a cloud plus on-premise option designed for teams that require greater control over data handling, compliance alignment, and multilingual support across regions (Lingvanex).
For many buyers, the choice is not only about accuracy; it is also about deployment flexibility, data governance, language coverage, and cost predictability at scale.
Note: Since features and plans can change, treat the table as a high-level snapshot and validate any must-have requirement directly with the vendor.
| Criteria | Lingvanex | Deepgram |
|---|---|---|
| Deployment Options | Lingvanex On-premise Speech Recognition – for high privacy/confidentiality requirements, full control over data. Lingvanex Speech-to-Text – cloud version, for cases without strict privacy needs. | Cloud-based by default; on-premise available only in Enterprise plans. |
| Multilingual Support | Supports 90+ languages | Supports 45+ languages |
| Supported Audio Formats | M4A, MP3, OGG, WAV, WMA, and more. | Supports a wide range of audio formats, including MP3, WAV, FLAC, OGG, and more; file size limits depend on plan and API configuration. |
| Real-Time & Batch Transcription | Offers both real-time voice recognition and pre-recorded audio transcription. | Offers real-time streaming and batch transcription through API. |
| Speaker Diarization | Supports multi-speaker diarization with speaker labeling and timestamp alignment. | Available (part of rich formatting features). |
| Automatic Punctuation | Included, improves transcript readability. | Available via smart formatting options. |
| Language Diversity / Accent Handling | Coverage and accent handling depend on the selected language, model configuration, and deployment settings. | Coverage and accent handling depend on the selected language, model configuration, and deployment settings. |
| Data Privacy & Compliance | Data handling depends on the selected deployment model: cloud deployments process audio within the provider’s infrastructure, while on-premise or offline configurations can enable local processing within the client’s environment, subject to system setup and policies. | Cloud-first or cloud-only processing models may limit direct data residency control for organizations operating under regulated workflows or internal security policies. |
| Pricing Model | Lingvanex Speech-to-Text has its own subscription tiers for cloud users. On-premise and enterprise deployments use license-based or custom pricing. | Deepgram offers multiple pricing options including Free credits, annual pre-paid plans, and custom pricing for large volumes or enterprise requirements. |
| Customization & Integration | APIs and SDKs available, customizable models for domain-specific terminology. | Flexible API design for developers with plugin and integration support. |
| Enterprise Readiness | Supports security controls, private deployment options, and multilingual configuration. | Cloud-first by default; on-premise and private cloud available in Enterprise/self-hosted plans. |
Lingvanex: Deployment Options and Where They Fit
Lingvanex offers two speech recognition solutions designed to meet different organizational needs: the cloud service Lingvanex Speech-to-Text (Cloud) and the on-premise solution Lingvanex On-Premise Speech Recognition. Below are the key features of each option.
Lingvanex Speech-to-Text (Cloud)
- Designed for fast transcription of interviews, calls, and recordings in environments where full on-premise infrastructure control is not required.
- Supports audio files up to 75 MB in formats including M4A, MP3, OGG, WAV, and WMA.
- Processes both live streaming and pre-recorded audio.
- Available under subscription-based plans aligned with different usage levels and organizational needs.
Lingvanex On-Premise Speech Recognition
- Deployment model intended for environments with elevated data security and infrastructure control requirements.
- Runs within the client’s infrastructure, including desktop systems (macOS, Windows) and controlled enterprise environments.
- All speech recognition processing can occur locally, without external data transfer when configured in offline mode.
- Designed for scenarios with increased regulatory, confidentiality, or internal governance expectations, depending on deployment configuration.
- Supports domain-specific customization of terminology and recognition models.
- Includes configurable access control, audit logging, and data residency options within the client’s infrastructure.
- Commonly considered by regulated industries and organizations operating under strict internal security policies.
- Supports audio and video formats such as WAV, WMA, MP3, OGG, M4A, FLV, AVI, MP4, MOV, and MKV.
- File size handling depends on infrastructure capacity and system configuration.
- Can be integrated with Lingvanex On-premise Machine Translation for multilingual workflows.
- Supports real-time transcription and domain-specific configuration options.
- Enables subtitle generation in formats including SRT, VTT, ASS, SSA, and SUB.
- User capacity and workload limits depend on hardware sizing and licensing configuration.
- Cost structure depends on licensing model, infrastructure design, and workload characteristics.
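Subtitle export from a timestamped transcript is straightforward to validate during a pilot. The sketch below shows the general shape of SRT generation; the `(start, end, text)` segment structure is an illustrative assumption, since real output fields vary by provider:

```python
# Sketch: turning timestamped transcript segments into SRT subtitle blocks.
# The segment tuple format is an illustrative assumption.

def to_srt_timestamp(seconds: float) -> str:
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """segments: iterable of (start_sec, end_sec, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)

srt = segments_to_srt([(0.0, 2.5, "Hello, and welcome."),
                       (2.5, 5.0, "Let's get started.")])
```

Comparing generated cue timings against the source audio is a quick way to verify the timestamp-alignment criterion from the testing section.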
Shared Features (Cloud and On-Premise)
- Built on neural network-based speech recognition technology.
- Supports speaker diarization and automatic time-stamping.
- Provides coverage across 90+ languages.
- Produces structured transcripts suitable for live and pre-recorded audio workflows.
Selection & Migration Framework
Choosing and implementing a speech-to-text provider should follow a structured process, from defining requirements to controlled rollout.
1. Define Requirements
Clarify language coverage, streaming vs batch workloads, latency expectations, privacy obligations, file formats, and projected volume growth. This establishes the baseline for both evaluation and long-term cost modeling.
2. Evaluate Deployment Model
Decide whether cloud, private cloud, on-premise, or offline deployment aligns with your internal IT policies, regulatory constraints, and data governance standards.
3. Assess Core Features
Review diarization quality, timestamp accuracy, subtitle generation support, multilingual coverage, customization options, and integration capabilities.
4. Test with Real Audio
Run a pilot using production-like audio samples. Measure performance under realistic conditions, including noise, multiple speakers, and domain-specific vocabulary.
5. Model Cost at Scale
Estimate total cost of ownership based on audio volume, peak concurrency, feature usage, infrastructure requirements (if on-prem), and support expectations.
6. Validate Integrations & Governance
Ensure compatibility with authentication systems, logging infrastructure, retention policies, and compliance workflows before exposing live traffic.
7. Phased Rollout
Introduce the new provider gradually, monitor accuracy and performance under load, and scale incrementally to reduce operational risk.
Conclusion: Choosing a Deepgram Alternative in 2026
When selecting a speech-to-text provider, the decision typically depends on a few core conditions:
- Choose a cloud-first, API-driven platform when rapid integration and real-time streaming performance are the primary priorities.
- Choose an on-premise or private deployment model when data control, infrastructure ownership, or regulatory alignment are central requirements.
- Prioritize broad multilingual coverage when supporting multiple regions, accents, or cross-border operations.
- Focus on cost predictability at scale when processing stable, high audio volumes over time.
- Evaluate enterprise controls and governance features when auditability, access management, and retention policies are critical.
The right choice depends on how your organization balances deployment flexibility, compliance requirements, scalability, and long-term cost structure.
About the Reviewer
Aliaksei Rudak, CEO of Lingvanex, is a seasoned expert in machine translation and data processing with more than 15 years of experience in the IT industry. Beginning his career as an iOS developer, he now oversees the design and delivery of Enterprise-MT solutions, ensuring their scalability, security, and seamless integration with complex enterprise infrastructures.



