Ryan Moore · October 31, 2024 · 12 min read

Streamlining Data Insights: Developing Streamlit Apps Inside Snowflake

In today’s data-driven world, companies rely on cloud platforms like Snowflake to manage and analyze vast amounts of information efficiently. Snowflake’s robust architecture has made it a popular choice for organizations needing scalable, secure, high-performing data solutions. Streamlit, meanwhile, is best known for its open-source framework for rapidly building interactive web apps in Python, enabling users to visualize and interact with data in real time. Since Snowflake acquired Streamlit in 2022, it has been possible to build Streamlit applications right inside a Snowflake instance, making use of Snowflake's scalable processing resources. By integrating Streamlit directly with Snowflake, data teams unlock even greater value, combining powerful data processing with user-friendly app interfaces to enhance decision-making and streamline workflows. If you haven’t tried Streamlit applications inside your Snowflake instance yet, we have summarized some key features and steps to help you get started.

Streamlit Features That Simplify Your Processes

Streamlit is an open-source Python framework designed to simplify the process of building interactive web applications, particularly for data science and machine learning projects. It allows developers to turn data scripts into fully functional web apps with just a few lines of code, making it highly accessible to anyone familiar with Python.

Streamlit’s Key Features:

  • Interactive dashboards. Streamlit makes it easy to create interactive dashboards where users can filter data, explore charts, and interact with widgets like sliders, dropdowns, and buttons.
  • Ease of use with Python. Since it’s built for Python, Streamlit integrates seamlessly with popular libraries like Pandas, NumPy, and Matplotlib. There’s no need to learn web development languages like HTML, CSS, or JavaScript.
  • Instant updates.  Streamlit apps update automatically whenever the underlying Python code is modified, providing a real-time development experience without the need for manual refreshes or deployment steps.

Streamlit’s simplicity and flexibility make it an ideal tool for quickly building web applications. Whether you’re a data scientist looking to showcase your model results or a developer creating an interactive analytics dashboard, Streamlit provides a fast, intuitive way to turn your code into a shareable, browser-based app without any web development expertise.

Using Streamlit Inside Snowflake Is a Bright Idea

Combining Streamlit with Snowflake opens up a new level of efficiency and capability for building data-driven applications.  Integrating these two powerful tools can significantly enhance your data workflows. At Snow Fox Data, we’ve helped organizations take advantage of the lightweight nature of Streamlit to build reporting and data entry applications within their Snowflake environment. Additional features that make Streamlit in Snowflake a good platform for application development include:

  • Enhanced Analytics. By using Streamlit inside Snowflake, you can perform real-time data querying directly from the Snowflake platform. This means that users can interact with up-to-date data through a web app without needing complex pipelines or manual refreshes. Whether you’re analyzing large datasets or monitoring key performance metrics, Streamlit allows you to create apps that provide instant insights from your Snowflake data.
  • Ease of Deployment. Integrating Streamlit with Snowflake is incredibly straightforward, allowing you to build and deploy applications without complicated infrastructure setups. You don’t need to worry about managing servers or orchestrating multiple layers of technology—Streamlit’s ease of use, combined with Snowflake’s cloud-native platform, ensures that you can go from idea to fully functioning app quickly.
  • Data Security. When you connect Streamlit directly to Snowflake, you maintain data security by keeping all sensitive information within the Snowflake environment. There’s no need to move data to external platforms for analysis or visualization. This approach not only reduces the risk of data breaches but also ensures that your applications comply with organizational and regulatory security standards.
  • Scalability. Snowflake’s auto-scaling capabilities make it the perfect match for Streamlit applications. As your data grows, or as more users interact with your app, Snowflake automatically adjusts its resources to ensure smooth performance. Meanwhile, Streamlit’s lightweight framework ensures that the front-end remains user-friendly, offering an interactive interface that scales seamlessly alongside Snowflake’s back-end infrastructure.

By combining the strengths of Streamlit and Snowflake, organizations can enhance their analytics workflows, improve security, and deploy scalable solutions with minimal effort.

Setting up a Streamlit Application in Snowflake

Getting started with a Streamlit app in your Snowflake account is a very straightforward process. In the left-hand navigation, under the “Projects” menu, you’ll find a link for “Streamlit” which will take you to the “Streamlit Apps” page.
[Screenshot: the “Streamlit” link under the “Projects” menu in the Snowflake navigation]

Once you have created applications, you’ll find them listed here. We’ll get started with this example by clicking the + Streamlit App button to open the new app creation dialog, as shown below. In this dialog, you are required to specify an application title, database, schema, and warehouse from your Snowflake resources.

Note: It’s possible to create a new Streamlit application using Snowflake SQL commands or the Snowflake CLI as outlined here.
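As a sketch of the SQL route, a statement along these lines creates an app from files already uploaded to a stage (all object names below are placeholders, and the stage must already contain your app’s Python file):

```sql
CREATE STREAMLIT my_db.my_schema.customer_demo
  ROOT_LOCATION = '@my_db.my_schema.streamlit_stage'
  MAIN_FILE = '/streamlit_app.py'
  QUERY_WAREHOUSE = my_warehouse;
```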

[Screenshot: the new Streamlit App creation dialog]

After selecting a viable location and naming your application, clicking Create will launch a new Streamlit application running on the resources of your warehouse, with some default Python application code as shown below.

[Screenshot: the new Streamlit application running with the default sample code]

You’ll notice that the default development environment for Streamlit applications in Snowflake is a split-screen view, allowing you to edit the Python Streamlit code in the left pane while running the application live in the right pane. After making a change to the code in the editor, just click the Run button to execute it. You can try this out by modifying line 6, where the Streamlit code generates an HTML title element. After clicking Run, the application’s title will update immediately. This setup can provide a very fluid development experience, but it’s important to note that it’s not currently possible to use a more advanced external editor (or tools like source control) for development.

Connecting to Snowflake Data

The out-of-the-box sample code provides a great introduction to the possibilities of Streamlit application development, but a much more interesting use-case is to utilize the low-code capabilities of Streamlit to directly access and display Snowflake data from your environment. The key method to make this possible is the Snowpark Python API’s get_active_session method which returns a session object to interact with the database, schema and warehouse associated with this application instance. More information on the Snowflake session can be found here.

To demonstrate this capability, we’ll delete the template application code and start fresh with the code below, which imports the needed Python libraries, establishes a session, and writes an HTML header element to the user interface. Note that while Streamlit lets you freely mix data manipulation, logic, and user interface code, it’s good practice to organize your code by functionality; mixing everything together can quickly lead to a hard-to-maintain codebase.

# Import python packages
import streamlit as st
from snowflake.snowpark.context import get_active_session
from snowflake.snowpark.functions import col

# Create an HTML Header
st.header("Customer Query Streamlit Demo", divider=True)

# Get the current Snowpark session
session = get_active_session()

Next, we’ll add code to execute a SQL query against a table in our Snowflake instance. For this example, we’re going to use the out-of-the-box sample database, which has millions of records to work with. To execute an arbitrary query, we’ll use the Snowpark session.sql method, which returns a Snowpark DataFrame, and we’ll immediately convert that DataFrame to a Pandas DataFrame using the to_pandas method. The majority of Streamlit’s user interface elements accept Pandas DataFrames as their data source, which is exactly how we’ll use this one.

Much of the Streamlit-specific documentation is provided outside of the Snowflake documentation. For example, a full list of Streamlit UI elements can be found here.

market_query = '''
SELECT DISTINCT(C_MKTSEGMENT) FROM 
SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER
'''
market_df = session.sql(market_query).to_pandas()

In the next section of code, we’ll create a Streamlit selectbox from this DataFrame by providing it as the “options” parameter. This code, combined with the SQL query above, will create an HTML <select> element with all of the distinct “Market Segments” from the sample CUSTOMER table as listed options.

sel_mkt = st.selectbox('Select Market', options=market_df)

In the next block, we’re going to write code that reacts when a user selects one of the options generated above. If you’re familiar with application development, this block may seem a bit counterintuitive since we’re not explicitly handling a “change” event. That’s part of the magic of Streamlit: by default, the entire script re-runs with every user interaction (more on this topic below). It is possible to avoid unnecessary re-runs using tools like st.form, but we’ll stick with the basics in this example.

if sel_mkt is not None:
    table = 'SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER'
    df = session.table(table).filter(
        col('C_MKTSEGMENT') == sel_mkt).to_pandas()

    st.dataframe(df[['C_MKTSEGMENT', 'C_NAME', 'C_ACCTBAL']])

    total_balance = df['C_ACCTBAL'].sum()
    st.subheader(f'TOTAL ACCT_BAL: ${total_balance:,.2f}')

In the first line above, we’re checking whether the user has selected an option, which will be stored in the sel_mkt variable. If this variable is set, we create another Pandas DataFrame using an alternative method for accessing data (different from the SQL query shown previously). This approach takes advantage of the Snowpark table.filter method along with the col object to perform a dynamic query. Once we have the resulting DataFrame, we create two new UI elements to display the results: first st.dataframe, which renders an interactive table from a Pandas DataFrame, and then a subheader element to display a total account balance.

The result of this code is shown in the screenshot below. This application allows the user to select a “Market Segment” from the select box, and then dynamically displays all Customer records from the “CUSTOMER” table, along with their summarized account balance. As you can see, this is a very small amount of code to write in order to generate an interactive application that connects directly and securely to your Snowflake data.
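The filter-and-summarize logic itself is plain Pandas once the data leaves Snowpark, so it can be sanity-checked locally with sample data (the rows below are made up for illustration):

```python
import pandas as pd

# Hypothetical sample rows mirroring the CUSTOMER columns used above.
df = pd.DataFrame({
    "C_MKTSEGMENT": ["AUTOMOBILE", "BUILDING", "AUTOMOBILE"],
    "C_NAME": ["Customer#1", "Customer#2", "Customer#3"],
    "C_ACCTBAL": [1200.50, 800.00, 99.50],
})

# The same filter as col('C_MKTSEGMENT') == sel_mkt, expressed in pandas.
sel_mkt = "AUTOMOBILE"
filtered = df[df["C_MKTSEGMENT"] == sel_mkt]

total_balance = filtered["C_ACCTBAL"].sum()
print(f"TOTAL ACCT_BAL: ${total_balance:,.2f}")  # prints $1,300.00
```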

[Screenshot: the demo app with the Market Segment selector, customer table, and total account balance]

Sharing With Others

Sharing your Streamlit applications within Snowflake offers a streamlined and efficient way to bring these data-driven applications directly to other users in your organization in a secure, centralized environment. With permissions and user roles already configured within Snowflake, Streamlit apps inherit the same security protocols, ensuring that data access remains controlled and compliant. This approach enhances data accessibility and encourages collaboration across teams, making it easier for non-technical stakeholders to interact with data insights and drive informed decisions.

Sharing our application from the Snowflake Streamlit UI is as easy as clicking the Share button in the upper-right corner of the screen. The resulting dialog provides a generated link for sharing the application, as well as the ability to modify the roles of those who are granted access. Note that any changes you make to the application while others are using it will be visible to shared users in real time; there is currently no concept of “environments” to gate changes.

[Screenshot: the Share dialog with the generated link and role-based access options]

Understanding the Streamlit Application

 

The Streamlit Application Model

Although brief, the code written above gives an applicable example of application development in Streamlit. As your future applications expand, however, performance will likely become a big consideration. To overcome this, it is very important to understand how Streamlit applications run and cache data. Streamlit’s framework is designed to re-run the entire script you’ve written from top to bottom with each user interaction. This behavior makes it highly reactive but requires some specific strategies to optimize performance and maintain session state.

Here’s a breakdown of how Streamlit’s framework operates and how to manage it:

  1. Reactivity Model
    As we saw in our demo application, every time a user interacts with an element, like a button or select box, Streamlit re-runs the entire script. This is similar to how a Jupyter notebook cell might be re-executed in response to new inputs. By re-running from top to bottom, Streamlit can ensure that all variables and elements reflect the latest input state, allowing it to re-render the app in sync with any user changes.
  2. Session State
    Since the entire code is re-run, Streamlit doesn’t store variable values between interactions by default. To address this, Streamlit introduced the st.session_state object, which allows developers to save values across reruns. This enables more control over persistent data so users can maintain context across interactions without re-initializing certain values. More detailed information on session state can be found here.
  3. Data Caching
    For performance optimization, Streamlit offers caching through the @st.cache_data decorator for data or @st.cache_resource for resource-heavy objects like models. By caching data that doesn’t change often—such as large datasets, database queries, or ML models—developers can prevent costly re-computations on each interaction. Cached functions are skipped during re-runs, making the app more responsive. More information about caching can be found here.
  4. Control Flow With Conditionals
    Since the whole script re-runs, it’s essential to use conditionals wisely to control flow based on st.session_state or cached variables. This way, you can avoid re-running certain blocks of code that don’t need to change, further optimizing app performance.

In summary, this rerun model helps Streamlit ensure interactivity and simplicity but does require developers to be very aware of state management, caching, and flow control to achieve a responsive, efficient user experience.

Streamlit and Virtual Warehouses

Streamlit in Snowflake requires a virtual warehouse to run the app and execute its SQL queries. You must select a single virtual warehouse to serve both purposes, and that warehouse remains active while the application is open in a user’s browser. The default timeout for this activity is 15 minutes.

To conserve credits, it is possible to auto-suspend the virtual warehouse or create a custom sleep timer to override the 15-minute default. More information is available here.
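For example, a warehouse’s idle timeout can be tightened with a statement along these lines (the warehouse name is a placeholder, and 60 seconds is just an illustrative value):

```sql
ALTER WAREHOUSE streamlit_wh SET AUTO_SUSPEND = 60;
```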

Streamlining With Streamlit

As we’ve seen, Streamlit’s simplicity and flexibility make it an ideal tool for quickly building web applications, even without a great deal of web development experience. The integration of Streamlit into Snowflake provides the opportunity to put your application within the same sandbox as your data, which has tremendous advantages, including security and shareability. This combination can allow data scientists, engineers, analysts, and developers to deliver lightweight applications capable of driving valuable business decisions. 

As a Snowflake Select Partner,  we can help you with everything from AI and analytics to data migration and integration, data warehousing, data modeling, and query optimization.  Read more information about leveraging Snowflake best practices here.


Ryan Moore

With over 20 years of experience as a Lead Software Architect, Principal Data Scientist, and technical author, Ryan Moore is the Head of Delivery and Solutions at Snow Fox Data and our resident Dataiku Neuron. He provides Data Science architecture and implementation consultation to organizations around the globe.
