
Commit a4d33a6

merge main

2 parents 21b0a13 + 916fc4c

File tree

105 files changed: +1905 −752 lines changed

blog/2023-06-05-discord-bot/index.mdx

Lines changed: 167 additions & 162 deletions
Large diffs are not rendered by default.

blog/2023-08-10-supabase-partnership/index.md

Lines changed: 4 additions & 7 deletions
@@ -1,18 +1,15 @@
 ---
 authors: [henricourdent]
-tags:
-  [
-    'Supabase',
-    'Partnership',
-    'Database',
-  ]
+tags: ['Supabase', 'Partnership', 'Database']
 image: ./0-header.png
 ---
 
 # Windmill and Supabase partner for smooth integration between databases and internal tools
 
 Windmill is proud to announce a partnership with [Supabase](https://supabase.com/) to easily integrate databases to interact with scripts, flows, and apps.
+
 <!--truncate-->
+
 Although we support multiple database providers, Supabase is by far the most recommended one due to its performance and security capabilities.
 
 <br/>
@@ -65,4 +62,4 @@ Concretely for us, it opens the door for endless possibilities for our joint use
   className="border-2 rounded-xl object-cover w-full h-full dark:border-gray-800"
   controls
   src="/videos/supabase_wizard.mp4"
-/>
+/>

blog/2023-11-21-dedicated-workers/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -49,7 +49,7 @@ It is faster than AWS lambda: https://www.windmill.dev/docs/misc/benchmarks/aws_
 
 ### For Python and Typescript
 
-Dedicated workers work with Typescript and Python scripts, they have the highest cold starts. Queries to databases such as PostgreSQL, MySQL, BigQuery, or bash and go scripts do not suffer from any cold starts and hence have the same benefits already without any compexity.
+Dedicated workers work with Typescript and Python scripts, as those have the highest cold starts. Queries to databases such as PostgreSQL, MySQL, BigQuery, or Bash and Go scripts do not suffer from any cold starts and hence have the same benefits already without any complexity.
 
 ## How to assign dedicated workers to a script
 

blog/2023-11-22-why-is-windmill-the-fastest-workflow-engine/index.mdx

Lines changed: 2 additions & 2 deletions
@@ -134,7 +134,7 @@ hand-built workflow engine written on top of the amazing [BEAM](<https://en.wiki
 
 There are tons of workflow engines, but not many of them are self-hostable and generic enough to support arbitrary workloads of jobs defined in code,
 and even those have restrictions:
-Some like Airflow and Prefect support only one runtime (Python). Windmill on the other hand supports Typescript/Javascript, Python, Go, Bash and direct SQL queries to BigQuery, Snowflake, Mysql, Postgresql. And its design makes it easy to add more upon request.
+Some like Airflow and Prefect support only one runtime (Python). Windmill on the other hand supports Typescript/Javascript, Python, Go, Bash and direct SQL queries to BigQuery, Snowflake, Mysql, Postgresql, MSSQL. And its design makes it easy to add more upon request.
 Some are notoriously hard to write for (because of complex SDKs, looking at you Airflow's XCOM or Temporal idempotency primitives) and deploy to. Windmill offers an [integrated DX to build and test workflows](/docs/flows/flow_editor) in a few minutes interactively in a mix of raw code for the steps and low-code (or YAML) for the DAG itself. It is also possible to define them wholly with code and full version control using our [VS Code extension](/blog/launch-week-1/vscode-extension).
 
 One benefit of being very fast is that it makes running tests very fast too both in terms of latency to start and to run. Wasting time waiting for previews and tests to run is not fun.
@@ -303,7 +303,7 @@ json_path.map(|x| x.split(".").map(|x| x.to_string()).collect::<Vec<_>>())
 
 ## Workers efficiency
 
-In normal mode, workers pull job one at a time, identify the language used by the job (python, typescript, go, bash, snowflake, Postgresql, mysql, mssql, bigquery) and then spawn the corresponding runtime then run the job.
+In normal mode, workers pull jobs one at a time, identify the language used by the job (Python, TypeScript, Go, Bash, Snowflake, PostgreSQL, MySQL, MSSQL, BigQuery), then spawn the corresponding runtime and run the job.
 
 Workers run jobs bare, without running containers which gives us a performance boost compared to container based workflow engines. However, for sandboxing purposes, workers themselves can be run inside containers and can run each job in an nsjail sandbox.
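To make the dispatch described in that hunk concrete, here is a minimal sketch of a worker loop in normal mode: pull one job, look up its language, spawn the matching runtime, run the job. All names here (`pull_job`, `RUNNERS`, the in-memory queue) are hypothetical illustrations, not Windmill's actual implementation:

```python
import subprocess

# hypothetical mapping from job language to a runtime launcher;
# the real worker also covers Go and the SQL engines listed above
RUNNERS = {
    "python": ["python3", "-c"],
    "bash": ["bash", "-c"],
}

def pull_job(queue):
    # stand-in for pulling the next job from the job queue in the database
    return queue.pop(0) if queue else None

def worker_loop(queue):
    # normal mode: one job at a time, spawn the corresponding runtime, run the job
    while (job := pull_job(queue)) is not None:
        runner = RUNNERS[job["language"]]
        subprocess.run([*runner, job["code"]], check=True)

# usage: two tiny jobs in different runtimes
worker_loop([
    {"language": "python", "code": "print('hello from python')"},
    {"language": "bash", "code": "echo hello from bash"},
])
```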

blog/2023-11-24-data-pipeline-orchestrator/index.mdx

Lines changed: 5 additions & 5 deletions
@@ -119,8 +119,8 @@ In Windmill, you can just do:
 
 ```
 conn = duckdb.connect()
-s3_resource = wmill.get_resource("/path/to/resource")
-conn.execute(wmill.duckdb_connection_settings(s3_resource)["connection_settings_str"])
+# path/to/resource arg is optional and by default the workspace s3 resource will be used
+conn.execute(wmill.duckdb_connection_settings("/path/to/resource")["connection_settings_str"])
 
 conn.sql("SELECT * FROM read_parquet(s3://windmill_bucket/file.parquet)")
 ```
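For anyone lifting that snippet out of the diff, a self-contained version might look like the sketch below. It assumes the `wmill` and `duckdb` Python packages and a configured workspace S3 resource; the Parquet path is quoted here, since DuckDB expects a string literal:

```python
import duckdb
import wmill

# in-memory DuckDB instance
conn = duckdb.connect()

# apply the S3 connection settings exposed by the Windmill client;
# the "/path/to/resource" argument is optional and defaults to the workspace S3 resource
conn.execute(wmill.duckdb_connection_settings("/path/to/resource")["connection_settings_str"])

# query a Parquet file straight from the bucket
rows = conn.sql("SELECT * FROM read_parquet('s3://windmill_bucket/file.parquet')").fetchall()
print(rows[:5])
```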
@@ -147,8 +147,8 @@ with s3.open("s3://windmill_bucket/file.parquet", mode="rb") as f:
 becomes in Windmill:
 
 ```python
-s3_resource = wmill.get_resource("/path/to/resource")
-s3 = s3fs.S3FileSystem(**wmill.polars_connection_settings(s3_resource))
+# /path/to/resource arg is optional and by default the workspace s3 resource will be used
+s3 = s3fs.S3FileSystem(**wmill.polars_connection_settings("/path/to/resource")["s3fs_args"])
 with s3.open("s3://windmill_bucket/file.parquet", mode="rb") as f:
     dataframe = pl.read_parquet(f)
 ```
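Again, as a self-contained sketch (assuming the `wmill`, `s3fs`, and `polars` packages and a configured workspace S3 resource):

```python
import polars as pl
import s3fs
import wmill

# "/path/to/resource" is optional and defaults to the workspace S3 resource;
# "s3fs_args" carries the keyword arguments S3FileSystem expects
s3 = s3fs.S3FileSystem(**wmill.polars_connection_settings("/path/to/resource")["s3fs_args"])

# stream the Parquet file from the bucket into a Polars dataframe
with s3.open("s3://windmill_bucket/file.parquet", mode="rb") as f:
    dataframe = pl.read_parquet(f)

print(dataframe.shape)
```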
@@ -167,7 +167,7 @@ s3object = dict
 def main(input_dataset: s3object):
     # initialization: connect Polars to the workspace bucket
     s3_resource = wmill.get_resource("/path/to/resource")
-    s3 = s3fs.S3FileSystem(wmill.duckdb_connection_settings(s3_resource))
+    s3 = s3fs.S3FileSystem(**wmill.polars_connection_settings("/path/to/resource")["s3fs_args"])
 
     # reading data from s3:
     bucket = s3_resource["bucket"]
