GenAI Security: Striking the Right Balance Between Innovation and Protection

Generative AI (GenAI), particularly large language models (LLMs), is reshaping how we work and transforming industries. From enhancing productivity in day-to-day operations to powering transformative customer-facing features, the momentum toward adopting AI-driven technologies is strong—and growing stronger.

But great opportunity comes with great responsibility. GenAI has improved productivity, but it also introduces new security challenges that traditional software systems were never designed to handle. At dotData, where we build enterprise-grade AI solutions, we constantly face the challenge of balancing innovation (“offense”) with robust governance and protection (“defense”).

In this post, we’ll explore why LLM security is uniquely difficult, what organizations can learn from the OWASP Top 10 for LLM Applications, and how dotData is tackling these security challenges across both product and organizational levels.

Why LLM Security is Uniquely Challenging

Traditional software systems operate under predictable rules: for the same input, you get the same output. Determinism allows for well-defined and testable security models. If unexpected behavior emerges, it’s typically considered a bug.

LLMs, by contrast, behave more like humans. They can generate different outputs from the same input, influenced by factors such as model architecture, context, temperature settings, and even prompt history. This non-deterministic nature makes input/output control far harder and opens the door to threats such as prompt injection, output leakage, and model exploitation. Without a proper security posture, these threats can lead to critical vulnerabilities and serious consequences.
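
To make the non-determinism concrete, here is a minimal sketch that assumes an OpenAI-compatible chat completions endpoint and the openai Python SDK; the model name and prompt are purely illustrative. The same prompt, sent twice at a non-zero temperature, will often produce two different answers, whereas a traditional function would be expected to return identical results every time.

```python
from openai import OpenAI  # assumes the OpenAI-compatible Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # non-zero temperature -> sampled, non-deterministic output
    )
    return response.choices[0].message.content


# The same input can yield different outputs on each call, which is why
# classic "same input, same output" security testing breaks down for LLMs.
print(ask("Summarize our refund policy in one sentence."))
print(ask("Summarize our refund policy in one sentence."))
```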

Foundational Security Principles Still Apply

Despite these differences, many GenAI security best practices remain relevant:

  • Input validation to prevent malicious code and prompts (see the sketch after this list)
  • Output filtering to avoid sensitive data leakage and improve data security
  • Supply chain risk management for plugins and model components
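
As a rough illustration of the first two controls (the patterns and rules below are assumptions for the sketch, not dotData's actual controls), input validation and output filtering can be implemented as lightweight checks wrapped around every LLM call:

```python
import re

# Naive deny-list of phrases commonly seen in prompt-injection attempts (illustrative only).
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal (your )?system prompt",
    r"disregard the above",
]

# Simple detectors for data that should never leave the system (illustrative only).
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",          # US SSN-like pattern
    r"(?i)api[_-]?key\s*[:=]\s*\S+",   # credentials pasted into context
]


def validate_input(user_prompt: str) -> str:
    """Reject prompts that match known injection phrases before they reach the model."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_prompt, flags=re.IGNORECASE):
            raise ValueError("Potential prompt injection detected; request blocked.")
    return user_prompt


def filter_output(model_reply: str) -> str:
    """Redact anything that looks like sensitive data before showing the reply to the user."""
    for pattern in SENSITIVE_PATTERNS:
        model_reply = re.sub(pattern, "[REDACTED]", model_reply)
    return model_reply
```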

However, what’s new is the scale and unpredictability of risk. That’s where modern AI security frameworks like OWASP’s Top 10 for LLM and Generative AI Applications come into play.

A Modern Playbook: OWASP Top 10 for LLM Applications

The OWASP Top 10 for LLM and Generative AI Applications (updated in late 2024 for 2025) provides a practical, evolving framework for risk identification and cybersecurity threat mitigation unique to generative AI systems.

Notably, the 2025 edition introduces two new threats:

#7: System Prompt Leakage

Unlike user prompts, system prompts define model behavior under the hood. If they contain credentials or other sensitive data, the impact of leakage is severe. Clear separation of secrets from prompts, along with redaction, is essential for data protection.
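
As a hedged illustration of that separation (the prompt text, environment variables, and function names below are hypothetical), secrets can be resolved server-side and never placed in any prompt, and replies can be checked for fragments of the system prompt before being returned:

```python
import os

# The system prompt defines behavior only; credentials stay in environment/secret storage.
SYSTEM_PROMPT = "You are a data-analysis assistant. Answer only questions about the loaded dataset."


def get_db_credentials() -> dict:
    """Credentials are resolved server-side and never appear in any prompt."""
    return {"user": os.environ["DB_USER"], "password": os.environ["DB_PASSWORD"]}


def guard_reply(reply: str) -> str:
    """Refuse to return replies that echo the system prompt back to the user."""
    if SYSTEM_PROMPT.lower() in reply.lower():
        return "I'm sorry, I can't share that."
    return reply
```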

#8: Embedding & Vectorization Vulnerabilities

Particularly relevant for Retrieval-Augmented Generation (RAG), this includes risks like vector database poisoning or embedding manipulation that may allow adversaries to “trick” the model into revealing unintended content.
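
One common mitigation, sketched below under assumptions (the source allow-list and hashing scheme are illustrative, not a specific product feature), is to admit only vetted, integrity-checked documents into the vector store so that poisoned content never reaches retrieval:

```python
import hashlib

TRUSTED_SOURCES = {"internal-wiki", "product-docs"}  # illustrative allow-list of vetted sources


def admit_to_vector_store(doc: dict, known_hashes: set[str]) -> bool:
    """Embed a document only if it comes from a vetted source and its content hash was registered at review time."""
    if doc["source"] not in TRUSTED_SOURCES:
        return False
    digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
    return digest in known_hashes  # reject content altered after review
```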

In addition to these new entries, terminology updates better reflect real-world issues. For example, “Insecure Output Handling” has been refined to “Improper Output Handling,” acknowledging the complexity of context-aware generative behavior.

These changes reflect how quickly real-world LLM deployments are maturing, and why continuous monitoring and regular risk reassessment are necessary to detect threats and keep generative AI security controls up to date.

The Evolving LLM Landscape: Innovation Meets Regulation

The past year has seen rapid LLM innovation: multi-modal models (text, image, code), lighter and faster models optimized for production, and expanding open-source ecosystems (e.g., LLaMA, DeepSeek).

Yet this rapid progress comes with emerging threats. Prompt injection vulnerabilities have been observed in tools like Slack AI and Microsoft Copilot. Public security concerns around data leakage, such as early reports involving DeepSeek, underscore the reputational risks tied to AI tool usage.

On the regulatory front:

  • EU AI Act: Stricter compliance regime under implementation
  • United States: A more fragmented, state-driven, and relatively lenient approach
  • Global Enterprises: Caught between innovation demands and regulatory compliance

This landscape creates complexity in security controls for enterprises managing global deployments.

dotData’s Approach to Securing Generative AI

At dotData, we serve data-driven enterprises. Trust is non-negotiable, so security is embedded into our product development lifecycle across technical, organizational, and governance dimensions.

Key Security Measures We Implement

  • SOC 2 Type II compliance, secure endpoint and identity management, and ongoing employee security training
  • Generative AI features are optional, not mandatory, so customers with stricter governance can still use our platform
  • Transparent LLM data usage: Our documentation specifies what data is sent to LLMs and under what circumstances
  • Comprehensive assessments based on OWASP LLM guidelines: We evaluate risk across user inputs, model behavior, and downstream usage

Deployment Flexibility for Security Needs

To accommodate varying customer needs, we offer:

  • Private connectivity with AWS Bedrock
    For example, when deployed in the AWS Tokyo region, customers can use Bedrock’s Claude models over VPC-only communication, keeping data local and secure.
  • On-premises support with open models
    We support customers running local open-source LLMs (e.g., via vLLM and LLaMA) entirely within their own infrastructure, eliminating external data exposure (see the sketch below).
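
For the on-premises option, here is a minimal sketch of what that can look like in practice, assuming vLLM's OpenAI-compatible server; the host, port, and model names are placeholders. Prompts and data stay entirely within the customer's network.

```python
from openai import OpenAI

# Start the server inside your own network first, for example:
#   vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000   (model name is illustrative)
client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # placeholder internal host
    api_key="EMPTY",  # vLLM's OpenAI-compatible server accepts a dummy key
)

reply = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Summarize yesterday's pipeline run."}],
)
print(reply.choices[0].message.content)
```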

These GenAI security measures allow organizations across industries—including finance and government—to adopt generative AI on their own terms.

Organizational Challenges: Balancing Speed and Control

One of the biggest organizational dilemmas is the balance between AI innovation speed and security assessment processes. Many large enterprises have robust security assessments for software onboarding — often taking weeks or months. This can hinder innovation.

A process that is too flexible increases risk; one that is too rigid stifles progress. The answer lies in graduated risk management:

  • Stage-based access: Use low-sensitivity data in early trials, and move to sensitive data only after full vetting (a simple policy sketch follows this list)
  • Information classification: Distinguish between boilerplate code and proprietary logic when using AI coding tools
  • Environment separation: Ensure production data is isolated from development or sandbox environments
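
To illustrate how such a policy might be encoded (the stage names, data classifications, and rules below are hypothetical, not a prescribed standard), a small lookup can gate which data classes a GenAI workload may touch at each rollout stage:

```python
# Hypothetical mapping from rollout stage to the data classifications it may use.
ALLOWED_DATA = {
    "trial":      {"public"},
    "pilot":      {"public", "internal"},
    "production": {"public", "internal", "confidential"},  # only after full security vetting
}


def may_use(stage: str, classification: str) -> bool:
    """Return True if a GenAI workload at this stage may process data of this classification."""
    return classification in ALLOWED_DATA.get(stage, set())


assert may_use("trial", "public")
assert not may_use("trial", "confidential")  # sensitive data waits for later stages
```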

This structured, risk-based approach allows organizations to test and deploy new GenAI capabilities without waiting for a “perfect” security framework.

Final Thoughts: Offensive Innovation, Defensive Confidence

GenAI brings exciting opportunities for automation, decision support, and service innovation. But as GenAI capabilities grow, so do risks.

Emerging technologies like the Model Context Protocol (MCP), which allows LLMs to interact with external tools, will require new trust and evaluation frameworks. Much like open-source libraries, the plugins and tools that interface with LLMs need scrutiny. But evaluating trust in an AI that can hallucinate adds a new layer of complexity.

Ultimately, zero AI risk is impossible. Organizations must develop an overall security posture of informed risk acceptance, grounded in robust AI governance, but still flexible enough to allow AI development and transformation.

At dotData, we believe that the future of AI lies not in choosing between “move fast” and “stay safe,” but in mastering both intelligently and responsibly.

Takumi Sakamoto, VP of Engineering

Takumi is VP of Engineering at dotData. He leads all engineering efforts, including the development and operations of dotData Cloud, product support, and security. Prior to joining dotData, he served as VP of Engineering at Kaizen Platform. He also held key roles at SmartNews as both a Site Reliability Engineer and Data Engineer, where he helped establish the SRE team and data infrastructure, and at DeNA as an infrastructure engineer responsible for operating large-scale web services. With a career spanning major tech companies and startups, Takumi brings deep technical expertise and a strong track record of solving complex engineering challenges and driving successful projects.
