ripgrep


24ways-datasette/24-ways-search-index.ipynb

449       "text/plain": [
450        "{'title': 'Why Bother with Accessibility?',\n",
451        " 'contents': 'Web accessibility (known in other fields as inclusive design or universal design) is the degree to which a website is available to as many people as possible. Accessibility is most often used to describe how people with disabilities can access the web.\\n\\nHow we approach accessibility\\n\\nIn the web community, there’s a surprisingly inconsistent approach to accessibility. There are some who are endlessly dedicated to accessible web design, and there are some who believe it so intrinsic to the web that it shouldn’t be considered a separate topic. Still, of those who are familiar with accessibility, there’s an overwhelming number of designers, developers, clients and bosses who just aren’t that bothered.\\n\\nOver the last few months I’ve spoken to a lot of people about accessibility, and I’ve heard the same reasons to ignore it over and over again. Let’s take a look at the most common excuses.\\n\\nExcuse 1: “People with disabilities don’t really use the web”\\n\\nAccessibility will make your site available to more people — the inclusion case\\n\\nIn the same way that the accessibility of a building isn’t just about access for wheelchair users, web accessibility isn’t just about blind users and screen readers. We can affect positively the lives of many people by making their access to the web easier.\\n\\nThere are four main types of disability that affect use of the web:\\n\\n\\n\\tVisual\\n\\tBlindness, low vision and colour-blindness\\n\\tAuditory\\n\\tProfoundly deaf and hard of hearing\\n\\tMotor\\n\\tThe inability to use a mouse, slow response time, limited fine motor control\\n\\tCognitive\\n\\tLearning difficulties, distractibility, the inability to focus on large amounts of information\\n\\n\\nNone of these disabilities are completely black and white\\n\\nExamining deafness, it’s clear from the medical scale that there are many grey areas between full hearing and total deafness:\\n\\n\\n\\tmild\\n\\tmoderate\\n\\tmoderately severe\\n\\tsevere\\n\\tprofound\\n\\ttotally deaf\\n\\n\\nFor eyesight, and brain conditions that affect what users see, there is a huge range of conditions and challenges:\\n\\n\\n\\tastigmatism\\n\\tcolour blindness\\n\\takinetopsia (motion blindness)\\n\\tscotopic visual sensitivity (visual stress related to light)\\n\\tvisual agnosia (impaired recognition or identification of objects)\\n\\n\\nWhile we might have medical and government-recognised definitions that tell us what makes a disability, day-to-day life is not so straightforward. People experience varying degrees of different conditions, and often one or more conditions at a time, creating a false divide when you view disability in terms of us and them.\\n\\nImpairments aren’t always permanent\\n\\nAs we age, we’re more likely to experience different levels of visual, auditory, motor and cognitive impairments. We might have an accident or illness that affects us temporarily. We might struggle more earlier or later in the day. There are so many little physiological factors that affect the way people interact with the web that we can’t afford to make any assumptions based on our own limited experiences.\\n\\nImpairments might be somewhere between the user and the website\\n\\nThere are also impairments that aren’t directly related to the user. Environmental factors have a huge effect on the way people interact with the web. 
These could be:\\n\\n\\n\\tLow bandwidth, or intermittent internet connection\\n\\tBright light, rain, or other weather-based conditions\\n\\tNoisy environments, or a location where the user doesn’t want to disturb their neighbours with sound\\n\\tBrowsing with mobile devices, games consoles and other non-desktop devices\\n\\tBrowsing with legacy browsers or operating systems\\n\\n\\nSuch environmental factors show that it’s not just those with physical impairments who benefit from more accessible websites. We started designing responsive websites so we could be more future-friendly, and with a shared goal of better optimised experiences, accessibility should be at the core of responsive web design.\\n\\nExcuse 2: “We don’t want to affect the experience for the majority of our users”\\n\\nAccessibility will improve your site for all your users — the usability case\\n\\nOn a basic level, the different disability groups, as shown in the inclusion case, equate to simple usability goals:\\n\\n\\n\\tVisual – make it easy to read\\n\\tAuditory – make it easy to hear\\n\\tMotor – make it easy to interact\\n\\tCognitive – make it easy to understand and focus\\n\\n\\nTaking care to ensure good usability in these areas will also have an impact on accessibility. Unless your site is catering specifically to a particular disability, where extreme optimisation is most beneficial, taking care to design with accessibility in mind will rarely negatively affect the experience of your wider audience.\\n\\nExcuse 3: “We don’t have the budget for accessibility”\\n\\nAccessibility will make you money — the business case\\n\\nBy reducing your audience through ignoring accessibility, you’re potentially excluding the income from those users. Designing with accessibility in mind from the beginning of a project makes it easier to make small inexpensive optimisations as part of the design and development process, rather than bolting on costly updates to increase your potential audience later on.\\n\\nThe following are excerpts from a white paper about companies that increased the accessibility of their websites to comply with government regulation.\\n\\n\\n\\tImprovements in accessibility doubled Legal and General’s life insurance sales online.\\n\\n\\n\\n\\tImprovements in accessibility increased Tesco’s grocery home delivery sales by £13 million in 2005… To their surprise they found that many normal visitors preferred the ease of navigation and improved simplicity of the [parallel] accessible site and switched to use it. Tesco have replaced their ‘normal’ site with their accessible version and expect a further increase in revenues.\\n\\n\\n\\n\\tImprovements in accessibility increased Virgin.net sales by 68%.\\n\\n\\nStatistics all from WSI white paper: Improve your website’s usability and accessibility to increase sales (PDF).\\n\\nExcuse 4: “Accessible websites are ugly”\\n\\nAccessibility won’t stop your site from being beautiful — the beauty case\\n\\nMany people use ugly accessible websites as proof that all accessible websites are ugly. This just isn’t the case. 
I’ve compiled some examples of beautiful and accessible websites with screenshots of how they look through the Color Oracle simulator and how they perform when run through Webaim’s Wave accessibility checker tool.\\n\\nWhile automated tools are no substitute for real users, they can help you learn more about good practices, and give you guidance on where your site needs improvements to make it more accessible.\\n\\nAmazon.co.uk\\n\\nIt may not be a decorated beauty, but Amazon is often first in functional design. It’s a huge website with a lot of interactive content, but it generates just five errors on the Wave test, and is easy to read under a Color Oracle filter.\\n\\n Screenshot of Amazon website\\n Screenshot of Amazon’s Wave results – five errors\\n Screenshot of Amazon through a Color Oracle filter\\n\\n24 ways\\n\\nWhen Tim Van Damme redesigned 24 ways back in 2007, it was a striking and unusual design that showed what could be achieved with CSS and some imagination. Despite the complexity of the design, it gets an outstanding zero errors on the Wave test, and is still readable under a Color Oracle filter.\\n\\n Screenshot of pre-2013 24 ways website design\\n Screenshot of 24 ways Wave results – zero errors\\n Screenshot of 24ways through a Color Oracle filter\\n\\nOpera’s Shiny Demos\\n\\nDemos and prototypes are notorious for ignoring accessibility, but Opera’s Shiny Demos site shows how exploring new technologies doesn’t have to exclude anyone. It only gets one error on the Wave test, and looks fine under a Color Oracle filter.\\n\\n Screenshot of Opera’s Shiny Demos website\\n Screenshot of Opera’s Shiny Demos Wave results – 1 error\\n Screenshot of Opera’s Shiny Demos through a Color Oracle filter\\n\\nSoundCloud\\n\\nWhen a site is more app-like, relying on more interaction from the user, accessibility can be more challenging. However, SoundCloud only gets one error on the Wave test, and the colour contrast holds up well under a Color Oracle filter.\\n\\n Screenshot of SoundCloud website\\n Screenshot of SoundCloud’s Wave results – one error\\n Screenshot of SoundCloud through a Color Oracle filter\\n\\nEducation and balance\\n\\nAs with most web design, doing accessibility well is about combining your knowledge of accessibility with your project’s context to create a balance that serves your users’ needs. Your types of content and interactions will dictate one set of constraints. Your users’ needs and goals will dictate another. In broad terms, web design as a practice is finding the equilibrium between these constraints.\\n\\nAnd then there’s just caring. The web as a platform is open, affordable and available to many. Accessibility is our way to ensure that nobody gets shut out.',\n",
452        " 'year': '2013',\n",
453        " 'author': 'Laura Kalbag',\n",

asgi-log-to-sqlite/setup.py

24      py_modules=["asgi_log_to_sqlite"],
25      install_requires=["sqlite_utils~=2.3.1"],
26      extras_require={"test": ["pytest", "pytest-asyncio", "asgiref==3.1.2"]},
27  )

asgi-log-to-sqlite/test_asgi_log_to_sqlite.py

1   from asgiref.testing import ApplicationCommunicator
2   from asgi_log_to_sqlite import AsgiLogToSqlite
3   import sqlite_utils
4   import pytest
5   
6   
17  
18  
19  @pytest.mark.asyncio
20  async def test_log_to_sqlite(tmpdir):
21      logfile = str(tmpdir / "log.db")
22      message = {"type": "http", "http_version": "1.0", "method": "GET", "path": "/"}
44  
45  
46  @pytest.mark.asyncio
47  async def test_log_to_sqlite_with_more_fields(tmpdir):
48      logfile = str(tmpdir / "log2.db")
49      message = {

azure-functions-datasette/README.md

23  ```
24  az storage account create \
25      --name datasettestorage \
26      --location westeurope \
27      --resource-group datasette-rg \
36      --runtime-version 3.8 \
37      --functions-version 3 \
38      --storage-account datasettestorage \
39      --os-type linux \
40      --name azure-functions-datasette

buildpack-datasette/README.md

3   Repository template for creating a new Datasette buildpack instance 
4   
5   More details: https://docs.datasette.io/en/latest/deploying.html#deploying-using-buildpacks
6   
7   This repository has been successfully deployed to the following providers:

covid-19-datasette/README.md

1   # covid-19-datasette
2   
3   [![Fetch latest data and deploy with Datasette](https://github.com/simonw/covid-19-datasette/workflows/Fetch%20latest%20data%20and%20deploy%20with%20Datasette/badge.svg)](https://github.com/simonw/covid-19-datasette/actions?query=workflow%3A%22Fetch+latest+data+and+deploy+with+Datasette%22)
4   
5   Deploys a Datasette instance with data from the following sources:
40  The New York Times has [a comprehensive README](https://github.com/nytimes/covid-19-data/blob/master/README.md) describing how their data is sourced. You should read it! They announced their data in [We’re Sharing Coronavirus Case Data for Every U.S. County](https://www.nytimes.com/article/coronavirus-county-data-us.html).
41  
42  They are using the data for their [Coronavirus in the U.S.: Latest Map and Case Count](https://www.nytimes.com/interactive/2020/us/coronavirus-us-cases.html) article.
43  
44  ## The Los Angeles Times
65  This repository includes CSV data for both of these tables.
66  
67  The [latest_ny_times_counties_with_populations](https://covid-19.datasettes.com/covid/latest_ny_times_counties_with_populations) view uses this data to calculate cases and deaths per million for US counties, based on the latest county figures from the New York Times.
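
The full view definition is only partially excerpted below (see `build_database.py`); as a rough sketch, the calculation joins the latest New York Times county rows against census population figures on the `fips` code. Table and column names other than `ny_times_us_counties` and `cases_per_million` are assumptions here, not the actual schema:

```sql
-- Hypothetical sketch of the per-million calculation, not the actual view
select
  c.date, c.county, c.state, c.cases, c.deaths,
  p.population,
  1000000.0 * c.cases / p.population as cases_per_million,
  1000000.0 * c.deaths / p.population as deaths_per_million
from ny_times_us_counties c
join county_populations p on c.fips = p.fips
where c.date = (select max(date) from ny_times_us_counties);
```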
68  
69  ## Example issues
70  
71  * Remember: the number of reported cases is very heavily influenced by the availability of testing.
72  * [This Twitter thread](https://twitter.com/politicalmath/status/1243950120598556672) is an excellent overview of the challenges involved in comparing numbers from different states and countries.
73  * On 23 March 2020 Johns Hopkins [added four new columns](https://github.com/CSSEGISandData/COVID-19/commit/e748b6d8a55e4a88371af56b129ababe1712522d) to the daily CSV file: `admin2`, `fips`, `active` and `combined_key`. These are not present in older CSV files. [#4](https://github.com/simonw/covid-19-datasette/issues/4).
75  * Santa Clara County appears to be represented as `Santa Clara, CA` in some records and `Santa Clara County, CA` in others - [example](https://covid-19.datasettes.com/covid/johns_hopkins_csse_daily_reports?province_or_state__contains=santa+clara&_sort_desc=day#g.mark=bar&g.x_column=day&g.x_type=ordinal&g.y_column=confirmed&g.y_type=quantitative).
76  * Passengers from the Diamond Princess cruise are represented by a number of different rows with "From Diamond Princess" in their `province_or_state` column - [example](https://covid-19.datasettes.com/covid/johns_hopkins_csse_daily_reports?_facet=province_or_state&_facet=country_or_region&province_or_state__contains=from+diamond&_sort_desc=day).
77  * The [latest_ny_times_counties_with_populations](https://covid-19.datasettes.com/covid/latest_ny_times_counties_with_populations) view omits some counties, notably all New York City counties, because the New York Times groups all New York City data into rows with `county` equal to "New York City" and an empty `fips` column. Thus total cases represented in [latest_ny_times_counties_with_populations](https://covid-19.datasettes.com/covid/latest_ny_times_counties_with_populations) are lower than total cases represented in [ny_times_us_states](https://covid-19.datasettes.com/covid/ny_times_us_states) by at least the number of cases in New York City.

covid-19-datasette/build_database.py

199 
200     # More views
201     db.create_view("latest_ny_times_counties_with_populations", """
202     select
203       ny_times_us_counties.date,

covid-19-datasette/metadata.json

195                     "source_url": "https://www2.census.gov/programs-surveys/popest/tables/2010-2019/counties/totals/co-est2019-annres.xlsx"
196                 },
197                 "latest_ny_times_counties_with_populations": {
198                     "sort_desc": "cases_per_million",
199                     "description_html": "<div style='border: 2px solid red; padding: 1em'>⚠️ Consult <a href='https://github.com/simonw/covid-19-datasette/blob/main/README.md'>the README</a> for warnings about using and building on this data. Also review <a href='https://fivethirtyeight.com/features/why-its-so-freaking-hard-to-make-a-good-covid-19-model/'>Why It’s So Freaking Hard To Make A Good COVID-19 Model</a> and <a href='https://medium.com/nightingale/ten-considerations-before-you-create-another-chart-about-covid-19-27d3bd691be8'>Ten Considerations Before You Create Another Chart About COVID-19</a>.</div>",

csvs-to-sqlite/CHANGELOG.md

104   Similar issue to a8ab5248f4a - when we extracted a column that included a
105   mixture of both integers and NaNs things went a bit weird.
106 - Added test for column extraction.
107 - Fixed bug with accidentally hard-coded column.
108 
114 
115   https://docs.travis-ci.com/user/deployment/pypi/
116 - Fixed tests for Python 2.
117 - Ensure columns of ints + NaNs map to SQLite INTEGER.
118 
133   Which is itself an updated version of the pattern described in http://dan-blanchard.roughdraft.io/7045057-quicker-travis-builds-that-rely-on-numpy-and-scipy-using-miniconda
134 
135   I had to switch to running `pytest` directly, because `python setup.py test` was still trying to install a pandas package that involved compiling everything from scratch (which is why Travis CI builds were taking around 15 minutes).
136 - Don't include an `index` column - rely on SQLite rowid instead.
137 
144 - Configure Travis CI.
145 
146   Also made it so `python setup.py test` runs the tests.
147 - Mechanism for converting columns into separate tables.
148 

csvs-to-sqlite/setup.cfg

1   [aliases]
2   test=pytest
3   
4   [bdist_wheel]

csvs-to-sqlite/setup.py

30          "six",
31      ],
32      extras_require={"test": ["pytest"]},
33      tests_require=["csvs-to-sqlite[test]"],
34      entry_points="""
35          [console_scripts]

datasette/README.md

5   [![Python 3.x](https://img.shields.io/pypi/pyversions/datasette.svg?logo=python&logoColor=white)](https://pypi.org/project/datasette/)
6   [![Tests](https://github.com/simonw/datasette/workflows/Test/badge.svg)](https://github.com/simonw/datasette/actions?query=workflow%3ATest)
7   [![Documentation Status](https://readthedocs.org/projects/datasette/badge/?version=latest)](https://docs.datasette.io/en/latest/?badge=latest)
8   [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette/blob/main/LICENSE)
9   [![docker: datasette](https://img.shields.io/badge/docker-datasette-blue)](https://hub.docker.com/r/datasetteproject/datasette)
18  
19  * [datasette.io](https://datasette.io/) is the official project website
20  * Latest [Datasette News](https://datasette.io/news)
21  * Comprehensive documentation: https://docs.datasette.io/
22  * Examples: https://datasette.io/examples
23  * Live demo of current main: https://latest.datasette.io/
24  * Support questions, feedback? Join our [GitHub Discussions forum](https://github.com/simonw/datasette/discussions)
25  

datasette/pytest.ini

1   [pytest]
2   filterwarnings=
3       # https://github.com/pallets/jinja/issues/927
8       ignore:.*current_task.*:PendingDeprecationWarning
9   markers =
10      serial: tests to avoid using with pytest-xdist

datasette/setup.cfg

1   [aliases]
2   test=pytest
3   
4   [flake8]

datasette/setup.py

35          "Documentation": "https://docs.datasette.io/en/stable/",
36          "Changelog": "https://docs.datasette.io/en/stable/changelog.html",
37          "Live demo": "https://latest.datasette.io/",
38          "Source code": "https://github.com/simonw/datasette",
39          "Issues": "https://github.com/simonw/datasette/issues",
40          "CI": "https://github.com/simonw/datasette/actions?query=workflow%3ATest",
41      },
42      packages=find_packages(exclude=("tests",)),
43      package_data={"datasette": ["templates/*.html"]},
44      include_package_data=True,
66          datasette=datasette.cli:cli
67      """,
68      setup_requires=["pytest-runner"],
69      extras_require={
70          "docs": ["sphinx_rtd_theme", "sphinx-autobuild"],
71          "test": [
72              "pytest>=5.2.2,<6.3.0",
73              "pytest-xdist>=2.2.1,<2.4",
74              "pytest-asyncio>=0.10,<0.16",
75              "beautifulsoup4>=4.8.1,<4.10.0",
76              "black==21.6b0",
77              "pytest-timeout>=1.4.2,<1.5",
78              "trustme>=0.7,<0.9",
79          ],
80      },
81      tests_require=["datasette[test]"],
82      classifiers=[
83          "Development Status :: 4 - Beta",

datasette/update-docs-help.py

1   from click.testing import CliRunner
2   from datasette.cli import cli
3   from pathlib import Path

datasette-allow-permissions-debug/setup.py

31      install_requires=["datasette"],
32      extras_require={
33          "test": ["pytest", "pytest-asyncio", "httpx"]
34      },
35      tests_require=["datasette-allow-permissions-debug[test]"],
36  )

datasette-atom/README.md

110     "datasette-atom": {
111       "allow_unsafe_html_in_canned_queries": {
112         "museums": ["latest", "moderation"]
113       }
114     }
116 }
117 ```
118 This will disable Bleach just for the canned queries called `latest` and `moderation` in the `museums.db` database.

datasette-atom/setup.py

30      entry_points={"datasette": ["atom = datasette_atom"]},
31      install_requires=["datasette>=0.49", "feedgen", "bleach"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
33      tests_require=["datasette-atom[test]"],
34  )

datasette-auth-existing-cookies/setup.py

34      install_requires=["appdirs", "httpx", "itsdangerous"],
35      extras_require={
36          "test": ["datasette", "pytest", "pytest-asyncio", "asgiref~=3.1.2"]
37      },
38      tests_require=["datasette-auth-existing-cookies[test]"],
39  )

datasette-auth-existing-cookies/test_datasette_auth_existing_cookies.py

1   from asgiref.testing import ApplicationCommunicator
2   from urllib.parse import quote
3   import json
4   import httpx
5   import pytest
6   
7   from itsdangerous import URLSafeSerializer
22  
23  
24  @pytest.mark.asyncio
25  @pytest.mark.parametrize("path", ["/", "/fixtures", "/foo/bar"])
26  @pytest.mark.parametrize("next_secret", [None, "secret"])
27  async def test_redirects_to_login_page(next_secret, path):
28      auth_app = ExistingCookiesAuthTest(
29          hello_world_app,
53  
54  
55  @pytest.mark.asyncio
56  @pytest.mark.parametrize("trust_it", [True, False])
57  async def test_redirects_to_login_page_trusting_x_forwarded_proto(trust_it):
58      auth_app = ExistingCookiesAuthTest(
59          hello_world_app,
86  
87  
88  @pytest.mark.asyncio
89  async def test_allow_access_if_auth_is_returned():
90      auth_app = ExistingCookiesAuthTest(
91          hello_world_app,
110 
111 
112 @pytest.mark.asyncio
113 async def test_access_denied():
114     auth_app = ExistingCookiesAuthTest(
115         hello_world_app,
130 
131 
132 @pytest.mark.asyncio
133 async def test_headers_to_forward():
134     auth_app = ExistingCookiesAuthTest(
135         hello_world_app,
163 
164 
165 def test_asgi_wrapper():
166     app = object()
167     config = {
200 
201 
202 @pytest.mark.asyncio
203 async def test_scope_auth_allows_access():
204     # This test can't use httpx because I need a custom scope
205     scope = {
206         "type": "http",

datasette-auth-github/README.md

75          ],
76          "gh_teams": [
77              "dogsheep/test"
78          ]
79      }

datasette-auth-github/setup.py

26      install_requires=["datasette>=0.51"],
27      extras_require={
28          "test": [
29              "datasette",
30              "pytest",
31              "pytest-asyncio",
32              "sqlite-utils",
33              "pytest-httpx",
34          ]
35      },
36      tests_require=["datasette-auth-github[test]"],
37      package_data={"datasette_auth_github": ["templates/*.html"]},
38  )

datasette-auth-passwords/setup.py

30      entry_points={"datasette": ["auth_passwords = datasette_auth_passwords"]},
31      install_requires=["datasette>=0.56.1"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
33      tests_require=["datasette-auth-passwords[test]"],
34      package_data={
35          "datasette_auth_passwords": [

datasette-auth-passwords/README.md

122     pipenv shell
123 
124 Now install the dependencies and test dependencies:
125 
126     pip install -e '.[test]'
127 
128 To run the tests:
129 
130     pytest

datasette-auth-tokens/README.md

15  ## Hard-coded tokens
16  
17  Read about Datasette's [authentication and permissions system](https://datasette.readthedocs.io/en/latest/authentication.html).
18  
19  This plugin lets you configure secret API tokens which can be used to make authenticated requests to Datasette.
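
A minimal `metadata.json` sketch of a hard-coded token, assuming the `tokens` configuration key and Datasette's `$env` secret mechanism (the environment variable name and actor shape are illustrative):

```json
{
    "plugins": {
        "datasette-auth-tokens": {
            "tokens": [
                {
                    "token": {"$env": "BOT_TOKEN"},
                    "actor": {"bot_id": "my-bot"}
                }
            ]
        }
    }
}
```

Requests would then authenticate by sending the token in an `Authorization: Bearer ...` header.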

datasette-auth-tokens/setup.py

25      entry_points={"datasette": ["auth_tokens = datasette_auth_tokens"]},
26      install_requires=["datasette>=0.44",],
27      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
28      tests_require=["datasette-auth-tokens[test]"],
29  )

datasette-backup/README.md

32      pipenv shell
33  
34  Now install the dependencies and test dependencies:
35  
36      pip install -e '.[test]'
37  
38  To run the tests:
39  
40      pytest

datasette-backup/setup.py

30      entry_points={"datasette": ["backup = datasette_backup"]},
31      install_requires=["datasette", "sqlite-dump>=0.1.1"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
33      tests_require=["datasette-backup[test]"],
34  )

datasette-basemap/README.md

38      pipenv shell
39  
40  Now install the dependencies and test dependencies:
41  
42      pip install -e '.[test]'
43  
44  To run the tests:
45  
46      pytest

datasette-basemap/setup.py

30      entry_points={"datasette": ["basemap = datasette_basemap"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio"]},
33      tests_require=["datasette-basemap[test]"],
34      package_data={"datasette_basemap": ["data/*"]},
35      python_requires=">=3.6",

datasette-block-robots/README.md

68      pipenv shell
69  
70  Now install the dependencies and test dependencies:
71  
72      pip install -e '.[test]'
73  
74  To run the tests:
75  
76      pytest

datasette-block-robots/setup.py

30      entry_points={"datasette": ["block_robots = datasette_block_robots"]},
31      install_requires=["datasette>=0.50"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
33      tests_require=["datasette-block-robots[test]"],
34  )

datasette-block/README.md

48      pipenv shell
49  
50  Now install the dependencies and test dependencies:
51  
52      pip install -e '.[test]'
53  
54  To run the tests:
55  
56      pytest

datasette-block/setup.py

30      entry_points={"datasette": ["block = datasette_block"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "asgi-lifespan"]},
33      tests_require=["datasette-block[test]"],
34      python_requires=">=3.6",
35  )

datasette-car-2019/README.md

122 ## California Civic Data
123 
124 https://www.californiacivicdata.org/ offers CSV downloads of the latest California campaign finance data. You can download it from https://calaccess.download/latest/flat.zip
125 
126     $ unzip flat.zip

datasette-bplist/setup.py

25      entry_points={"datasette": ["bplist = datasette_bplist"]},
26      install_requires=["datasette", "bpylist"],
27      extras_require={"test": ["pytest"]},
28      tests_require=["datasette-bplist[test]"],
29  )

datasette-cluster-map/README.md

162     pipenv shell
163 
164 Now install the dependencies and test dependencies:
165 
166     pip install -e '.[test]'
167 
168 To run the tests:
169 
170     pytest

datasette-cluster-map/setup.py

36      },
37      install_requires=["datasette>=0.54", "datasette-leaflet>=0.2.2"],
38      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
39      tests_require=["datasette-cluster-map[test]"],
40  )

datasette-clone/README.md

20  To download copies of all `.db` files from an instance, run:
21  
22      datasette-clone https://latest.datasette.io
23  
24  You can provide an optional second argument to specify a directory:
25  
26      datasette-clone https://latest.datasette.io /tmp/here-please
27  
28  The command stores its own copy of a `databases.json` manifest and uses it so that subsequent runs only download databases that have changed.
32  If your instance is protected by an API token, you can use `--token` to provide it:
33  
34      datasette-clone https://latest.datasette.io --token=xyz
35  
36  For verbose output showing what the tool is doing, use `-v`.

datasette-clone/setup.py

33      """,
34      install_requires=["requests", "click"],
35      extras_require={"test": ["pytest", "requests-mock"]},
36      tests_require=["datasette-clone[test]"],
37  )

datasette-column-inspect/setup.py

27          "datasette",
28      ],
29      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
30      tests_require=["datasette-column-inspect[test]"],
31      package_data={"datasette_column_inspect": ["templates/*.html"]},
32  )

datasette-configure-asgi/setup.py

25      py_modules=["datasette_configure_asgi"],
26      extras_require={
27          "test": ["pytest", "pytest-asyncio", "asgiref==3.1.2", "datasette"]
28      },
29  )

datasette-configure-asgi/test_datasette_configure_asgi.py

12  
13  
14  def test_asgi_wrapper():
15      app = object()
16      wrapper = asgi_wrapper(FakeDatasette({"class": "functools.wraps"}))
20  
21  
22  def test_asgi_wrapper_with_args():
23      app = object()
24      wrapper = asgi_wrapper(

datasette-configure-fts/setup.py

25      entry_points={"datasette": ["configure_fts = datasette_configure_fts"]},
26      install_requires=["datasette>=0.51", "sqlite-utils>=2.10"],
27      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
28      tests_require=["datasette-configure-fts[test]"],
29      package_data={"datasette_configure_fts": ["templates/*.html"]},
30  )

datasette-copyable/README.md

37      pipenv shell
38  
39  Now install the dependencies and test dependencies:
40  
41      pip install -e '.[test]'
42  
43  To run the tests:
44  
45      pytest

datasette-copyable/setup.py

30      entry_points={"datasette": ["copyable = datasette_copyable"]},
31      install_requires=["datasette>=0.49", "tabulate"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
33      tests_require=["datasette-copyable[test]"],
34      package_data={"datasette_copyable": ["templates/*.html"]},
35  )

datasette-cors/README.md

43  ## Testing it
44  
45  To test this plugin out, run it locally by saving one of the above examples as `metadata.json` and running this:
46  
47      $ datasette --memory -m metadata.json

datasette-cors/setup.py

26      install_requires=["asgi-cors~=0.3"],
27      extras_require={
28          "test": ["datasette~=0.29", "pytest", "pytest-asyncio", "asgiref~=3.1.2"]
29      },
30      tests_require=["datasette-cors[test]"],
31  )

datasette-cors/test_datasette_cors.py

1   import json
2   
3   import pytest
4   from asgiref.testing import ApplicationCommunicator
5   from datasette.app import Datasette
6   
7   
8   @pytest.mark.asyncio
9   async def test_datasette_cors_plugin_installed():
10      instance = ApplicationCommunicator(
11          Datasette([], memory=True).app(),
32  
33  
34  @pytest.mark.asyncio
35  @pytest.mark.parametrize(
36      "request_origin,expected_cors_header",
37      [
41      ],
42  )
43  async def test_asgi_cors_hosts(request_origin, expected_cors_header):
44      instance = ApplicationCommunicator(
45          Datasette(

datasette-css-properties/README.md

20  Once installed, this plugin adds a `.css` output format to every query result. This will return the first row in the query as a valid CSS file, defining each column as a custom property:
21  
22  Example: https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css produces:
23  
24  ```css
35  
36  ```html
37  <link rel="stylesheet" href="https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css">
38  <style>
39  .attraction-name:after { content: var(--name); }
44  Values will be quoted as CSS strings by default. If you want to return a "raw" value without the quotes (for example, to set a CSS property that is numeric or a color) you can specify that column name using the `?_raw=column-name` parameter. This can be passed multiple times.
45  
46  Consider [this example query](https://latest-with-plugins.datasette.io/github?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B):
47  
48  ```sql
57  ```
58  
59  This returns the first 6 characters of the most recently authored commit with a `#` prefix. The `.css` [output rendered version](https://latest-with-plugins.datasette.io/github.css?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B) looks like this:
60  
61  ```css
65  ```
66  
67  Adding `?_raw=custom-bg` to the URL produces [this instead](https://latest-with-plugins.datasette.io/github.css?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B&_raw=custom-bg):
68  
69  ```css
93      pipenv shell
94  
95  Now install the dependencies and test dependencies:
96  
97      pip install -e '.[test]'
98  
99  To run the tests:
100 
101     pytest

datasette-css-properties/setup.py

30      entry_points={"datasette": ["css_properties = datasette_css_properties"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio"]},
33      tests_require=["datasette-css-properties[test]"],
34      python_requires=">=3.6",
35  )

datasette-dateutil/README.md

43  ```
44  
45  [Try that query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++dateutil_parse%28%2210+october+2020+3pm%22%29%2C%0D%0A++dateutil_parse_fuzzy%28%22This+is+due+10+september%22%29%2C%0D%0A++dateutil_parse%28%221%2F2%2F2020%22%29%2C%0D%0A++dateutil_parse%28%222020-03-04%22%29%2C%0D%0A++dateutil_parse_dayfirst%28%222020-03-04%22%29%3B)
46  
47  ### Calculating Easter
49  - `dateutil_easter(year)` - returns the date for Easter in that year, for example `dateutil_easter("2020")` returns `2020-04-12`.
50  
51  [Example Easter query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++dateutil_easter%282019%29%2C%0D%0A++dateutil_easter%282020%29%2C%0D%0A++dateutil_easter%282021%29)
52  
53  ### JSON arrays of dates
60  - `dateutil_dates_between('1 january 2020', '5 jan 2020', 0)` - set the optional third argument to `0` to specify that you would like this to be exclusive of the last day. This example returns `["2020-01-01", "2020-01-02", "2020-01-03", "2020-01-04"]`.
61  
62  [Try these queries](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++dateutil_dates_between%28%271+january+2020%27%2C+%275+jan+2020%27%29%2C%0D%0A++dateutil_dates_between%28%271+january+2020%27%2C+%275+jan+2020%27%2C+0%29)
63  
64  The `dateutil_rrule()` and `dateutil_rrule_date()` functions accept the iCalendar standard `rrule` format - see [the dateutil documentation](https://dateutil.readthedocs.io/en/stable/rrule.html#rrulestr-examples) for more examples.
79    );
80  ```
81  [Try the rrule example query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++dateutil_rrule('FREQ%3DHOURLY%3BCOUNT%3D5')%2C%0D%0A++dateutil_rrule_date(%0D%0A++++'FREQ%3DDAILY%3BCOUNT%3D3'%2C%0D%0A++++'1st+jan+2020'%0D%0A++)%3B)
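
For reference, the query behind that link decodes to:

```sql
select
  dateutil_rrule('FREQ=HOURLY;COUNT=5'),
  dateutil_rrule_date(
    'FREQ=DAILY;COUNT=3',
    '1st jan 2020'
  );
```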
82  
83  ### Joining data using json_each()
93    )
94  ```
95  [Try that query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++value+as+date%0D%0Afrom%0D%0A++json_each%28%0D%0A++++dateutil_dates_between%28%271+Jan+2019%27%2C+%2731+Jan+2019%27%29%0D%0A++%29)
96  
97  You can run joins against this table by assigning it a name using SQLite's [support for Common Table Expressions (CTEs)](https://sqlite.org/lang_with.html).
98  
99  This example query uses `substr(created, 0, 11)` to retrieve the date portion of the `created` column in the [facetable demo table](https://latest-with-plugins.datasette.io/fixtures/facetable), then joins that against the table of days in January to calculate the count of rows created on each day. The `LEFT JOIN` against `days_in_january` ensures that days which had no created records are still returned in the results, with a count of 0.
100 
101 ```sql
123   days_in_january.date;
124 ```
125 [Try that query](https://latest-with-plugins.datasette.io/fixtures?sql=with+created_dates+as+%28%0D%0A++select%0D%0A++++substr%28created%2C+0%2C+11%29+as+date%0D%0A++from%0D%0A++++facetable%0D%0A%29%2C%0D%0Adays_in_january+as+%28%0D%0A++select%0D%0A++++value+as+date%0D%0A++from%0D%0A++++json_each%28%0D%0A++++++dateutil_dates_between%28%271+Jan+2019%27%2C+%2731+Jan+2019%27%29%0D%0A++++%29%0D%0A%29%0D%0Aselect%0D%0A++days_in_january.date%2C%0D%0A++count%28created_dates.date%29+as+total%0D%0Afrom%0D%0A++days_in_january%0D%0A++left+join+created_dates+on+days_in_january.date+%3D+created_dates.date%0D%0Agroup+by%0D%0A++days_in_january.date%3B#g.mark=bar&g.x_column=date&g.x_type=ordinal&g.y_column=total&g.y_type=quantitative) with a bar chart rendered using the [datasette-vega](https://github.com/simonw/datasette-vega) plugin.
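
The lines elided from the block above can be recovered from that link; the full query decodes to:

```sql
with created_dates as (
  select
    substr(created, 0, 11) as date
  from
    facetable
),
days_in_january as (
  select
    value as date
  from
    json_each(
      dateutil_dates_between('1 Jan 2019', '31 Jan 2019')
    )
)
select
  days_in_january.date,
  count(created_dates.date) as total
from
  days_in_january
  left join created_dates on days_in_january.date = created_dates.date
group by
  days_in_january.date;
```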
126 
127 ## Development
137     pipenv shell
138 
139 Now install the dependencies and test dependencies:
140 
141     pip install -e '.[test]'
142 
143 To run the tests:
144 
145     pytest

datasette-dateutil/setup.py

30      entry_points={"datasette": ["dateutil = datasette_dateutil"]},
31      install_requires=["datasette", "python-dateutil"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
33      tests_require=["datasette-dateutil[test]"],
34  )

datasette-debug-asgi/setup.py

30      entry_points={"datasette": ["debug_asgi = datasette_debug_asgi"]},
31      install_requires=["datasette>=0.50"],
32      extras_require={"test": ["pytest", "pytest-asyncio"]},
33      tests_require=["datasette-debug-asgi[test]"],
34      python_requires=">=3.6",
35  )

datasette-debug-asgi/test_datasette_debug_asgi.py

1   from datasette.app import Datasette
2   import pytest
3   
4   
5   @pytest.mark.asyncio
6   async def test_datasette_debug_asgi():
7       ds = Datasette([], memory=True)
8       response = await ds.client.get("/-/asgi-scope")

datasette-dns/README.md

34      pipenv shell
35  
36  Now install the dependencies and test dependencies:
37  
38      pip install -e '.[test]'
39  
40  To run the tests:
41  
42      pytest

datasette-dns/setup.py

30      entry_points={"datasette": ["dns = datasette_dns"]},
31      install_requires=["datasette", "dnspython"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "pytest-mock"]},
33      tests_require=["datasette-dns[test]"],
34  )

datasette-edit-schema/README.md

49      pipenv shell
50  
51  Now install the dependencies and test dependencies:
52  
53      pip install -e '.[test]'
54  
55  To run the tests:
56  
57      pytest

datasette-edit-schema/setup.py

28          "sqlite-utils>=2.21",
29      ],
30      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
31      tests_require=["datasette-edit-schema[test]"],
32      package_data={"datasette_edit_schema": ["templates/*.html", "static/*.js"]},
33  )

datasette-edit-templates/README.md

33      pipenv shell
34  
35  Now install the dependencies and test dependencies:
36  
37      pip install -e '.[test]'
38  
39  To run the tests:
40  
41      pytest

datasette-edit-templates/setup.py

36          ],
37      },
38      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
39      tests_require=["datasette-edit-templates[test]"],
40      python_requires=">=3.6",
41  )

datasette-export-notebook/README.md

20  ## Demo
21  
22  You can see this plugin in action on the [latest-with-plugins.datasette.io](https://latest-with-plugins.datasette.io/) Datasette instance - for example on [/github/commits.Notebook](https://latest-with-plugins.datasette.io/github/commits.Notebook).
23  
24  ## Development
34      pipenv shell
35  
36  Now install the dependencies and test dependencies:
37  
38      pip install -e '.[test]'
39  
40  To run the tests:
41  
42      pytest

datasette-export-notebook/setup.py

30      entry_points={"datasette": ["export_notebook = datasette_export_notebook"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "sqlite-utils"]},
33      tests_require=["datasette-export-notebook[test]"],
34      package_data={"datasette_export_notebook": ["templates/*.html"]},
35      python_requires=">=3.6",

datasette-glitch/setup.py

30      entry_points={"datasette": ["glitch = datasette_glitch"]},
31      install_requires=["datasette>=0.45"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
33      tests_require=["datasette-glitch[test]"],
34  )

datasette-haversine/setup.py

25      entry_points={"datasette": ["haversine = datasette_haversine"]},
26      install_requires=["datasette", "haversine"],
27      extras_require={"test": ["pytest"]},
28      tests_require=["datasette-haversine[test]"],
29  )

datasette-graphql/README.md

280 {
281     "databases": {
282         "test": {
283             "tables": {
284                 "repos": {
411     pipenv shell
412 
413 Now install the dependencies and test dependencies:
414 
415     pip install -e '.[test]'
416 
417 To run the tests:
418 
419     pytest

datasette-graphql/setup.py

30      entry_points={"datasette": ["graphql = datasette_graphql"]},
31      install_requires=["datasette>=0.57", "graphene>=2.0,<3.0", "sqlite-utils", "wrapt"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
33      tests_require=["datasette-graphql[test]"],
34      package_data={"datasette_graphql": ["templates/*.html", "static/*"]},
35  )

datasette-ics/setup.py

25      entry_points={"datasette": ["ics = datasette_ics"]},
26      install_requires=["datasette>=0.49", "ics==0.7"],
27      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
28      tests_require=["datasette-ics[test]"],
29  )

datasette-import-table/README.md

34      pipenv shell
35  
36  Now install the dependencies and test dependencies:
37  
38      pip install -e '.[test]'
39  
40  To run the tests:
41  
42      pytest

datasette-import-table/setup.py

30      entry_points={"datasette": ["import_table = datasette_import_table"]},
31      install_requires=["datasette", "httpx", "sqlite-utils"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "pytest-httpx"]},
33      tests_require=["datasette-import-table[test]"],
34      package_data={"datasette_import_table": ["templates/*.html"]},
35  )

datasette-indieauth/README.md

11  ## Demo
12  
13  You can try out the latest version of this plugin at [datasette-indieauth-demo.datasette.io](https://datasette-indieauth-demo.datasette.io/-/indieauth)
14  
15  ## Installation
78      pipenv shell
79  
80  Now install the dependencies and test dependencies:
81  
82      pip install -e '.[test]'
83  
84  To run the tests:
85  
86      pytest

datasette-indieauth/setup.py

31      install_requires=["datasette"],
32      extras_require={
33          "test": ["pytest", "pytest-asyncio", "httpx", "pytest-httpx", "mf2py"]
34      },
35      tests_require=["datasette-indieauth[test]"],
36      package_data={"datasette_indieauth": ["templates/*.html"]},
37      python_requires=">=3.6",

datasette-insert/README.md

231     pipenv shell
232 
233 Now install the dependencies and test dependencies:
234 
235     pip install -e '.[test]'
236 
237 To run the tests:
238 
239     pytest

datasette-insert/setup.py

31      install_requires=["datasette>=0.46", "sqlite-utils"],
32      extras_require={
33          "test": ["pytest", "pytest-asyncio", "httpx", "datasette-auth-tokens"]
34      },
35      tests_require=["datasette-insert[test]"],
36  )

datasette-init/README.md

85      pipenv shell
86  
87  Now install the dependencies and test dependencies:
88  
89      pip install -e '.[test]'
90  
91  To run the tests:
92  
93      pytest

datasette-init/setup.py

30      entry_points={"datasette": ["init = datasette_init"]},
31      install_requires=["datasette>=0.45", "sqlite-utils"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
33      tests_require=["datasette-init[test]"],
34  )

datasette-insert-unsafe/README.md

31      pipenv shell
32  
33  Now install the dependencies and test dependencies:
34  
35      pip install -e '.[test]'
36  
37  To run the tests:
38  
39      pytest

datasette-insert-unsafe/setup.py

30      entry_points={"datasette": ["insert_unsafe = datasette_insert_unsafe"]},
31      install_requires=["datasette", "datasette-insert>=0.6"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
33      tests_require=["datasette-insert-unsafe[test]"],
34  )

datasette-jq/setup.py

26      install_requires=["datasette", "pyjq", "six"],
27      extras_require={
28          "test": [
29              "pytest"
30          ]
31      },
32      tests_require=["datasette-jq[test]"],
33  )

datasette-jellyfish/README.md

10  Interactive demos:
11  
12  * [soundex, metaphone, nysiis, match_rating_codex comparison](https://latest-with-plugins.datasette.io/fixtures?sql=SELECT%0D%0A++++soundex%28%3As%29%2C+%0D%0A++++metaphone%28%3As%29%2C+%0D%0A++++nysiis%28%3As%29%2C+%0D%0A++++match_rating_codex%28%3As%29&s=demo).
13  * [distance functions comparison](https://latest-with-plugins.datasette.io/fixtures?sql=SELECT%0D%0A++++levenshtein_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++damerau_levenshtein_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++hamming_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++jaro_similarity%28%3As1%2C+%3As2%29%2C%0D%0A++++jaro_winkler_similarity%28%3As1%2C+%3As2%29%2C%0D%0A++++match_rating_comparison%28%3As1%2C+%3As2%29%3B&s1=barrack+obama&s2=barrack+h+obama)
14  
15  Examples:
38          -- Outputs 1
39  
40  See [the Jellyfish documentation](https://jellyfish.readthedocs.io/en/latest/) for an explanation of each of these functions.
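
As a sketch of what calling these looks like, using two of the functions named in the demo links above (outputs follow the standard Jellyfish algorithms):

```sql
select soundex('Jellyfish');                       -- 'J412'
select levenshtein_distance('kitten', 'sitting');  -- 3
```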

datasette-jellyfish/setup.py

30      entry_points={"datasette": ["jellyfish = datasette_jellyfish"]},
31      install_requires=["datasette", "jellyfish>=0.8.2"],
32      extras_require={"test": ["pytest"]},
33      tests_require=["datasette-jellyfish[test]"],
34  )

datasette-json-html/setup.py

30      entry_points={"datasette": ["json_html = datasette_json_html"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
33      tests_require=["datasette-json-html[test]"],
34  )

datasette-json-preview/README.md

22  ## Demos
23  
24  - https://latest-with-plugins.datasette.io/github/commits.json-preview
25  - https://latest-with-plugins.datasette.io/github/commits.json-preview?_extra=next_url
26  - https://latest-with-plugins.datasette.io/github/commits.json-preview?_extra=total
27  - https://latest-with-plugins.datasette.io/github/commits.json-preview?_extra=next_url&_extra=total
28  - https://latest-with-plugins.datasette.io/github/commits.json-preview?_extra=total&_size=0
29  
30  ## Development
40      pipenv shell
41  
42  Now install the dependencies and test dependencies:
43  
44      pip install -e '.[test]'
45  
46  To run the tests:
47  
48      pytest

datasette-json-preview/setup.py

30      entry_points={"datasette": ["json_preview = datasette_json_preview"]},
31      install_requires=["datasette>=0.55"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
33      tests_require=["datasette-json-preview[test]"],
34  )

datasette-leaflet/setup.py

30      entry_points={"datasette": ["leaflet = datasette_leaflet"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio"]},
33      tests_require=["datasette-leaflet[test]"],
34      package_data={
35          "datasette_leaflet": [

datasette-leaflet-freedraw/README.md

59      pipenv shell
60  
61  Now install the dependencies and test dependencies:
62  
63      pip install -e '.[test]'
64  
65  To run the tests:
66  
67      pytest

datasette-leaflet-geojson/setup.py

26      package_data={"datasette_leaflet_geojson": ["static/datasette-leaflet-geojson.js"]},
27      install_requires=["datasette>=0.54", "datasette-leaflet>=0.2"],
28      extras_require={"test": ["pytest", "pytest-asyncio"]},
29      tests_require=["datasette-leaflet-geojson[test]"],
30  )

datasette-leaflet-freedraw/setup.py

30      entry_points={"datasette": ["leaflet_freedraw = datasette_leaflet_freedraw"]},
31      install_requires=["datasette>=0.54", "datasette-leaflet>=0.2"],
32      extras_require={"test": ["pytest", "pytest-asyncio"]},
33      tests_require=["datasette-leaflet-freedraw[test]"],
34      package_data={
35          "datasette_leaflet_freedraw": [

datasette-mask-columns/setup.py

25      entry_points={"datasette": ["mask_columns = datasette_mask_columns"]},
26      install_requires=["datasette"],
27      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
28      tests_require=["datasette-mask-columns[test]"],
29  )

datasette-media/setup.py

31      install_requires=["datasette>=0.44", "Pillow>=7.1.2", "httpx>=0.13.3"],
32      extras_require={
33          "test": [
34              "asgiref",
35              "pytest",
36              "pytest-asyncio",
37              "sqlite-utils",
38              "pytest-httpx>=0.4.0",
39          ],
40          "heif": ["pyheif>=0.4"],
41      },
42      tests_require=["datasette-media[test]"],
43  )

datasette-permissions-sql/README.md

15  ## Usage
16  
17  First, read up on how Datasette's [authentication and permissions system](https://datasette.readthedocs.io/en/latest/authentication.html) works.
18  
19  This plugin lets you define rules containing SQL queries that are executed to see if the currently authenticated actor has permission to perform certain actions.
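
As a sketch of what such a rule might look like in `metadata.json`, assuming the plugin takes a list of action/resource/SQL blocks (all names here are illustrative, and the exact keys may differ):

```json
{
    "plugins": {
        "datasette-permissions-sql": [
            {
                "action": "view-table",
                "resource": ["mydatabase", "mytable"],
                "sql": "select 1 from allowed_users where id = :actor_id"
            }
        ]
    }
}
```

Presumably a rule's SQL returning at least one row grants the permission.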

datasette-permissions-sql/setup.py

25      entry_points={"datasette": ["permissions_sql = datasette_permissions_sql"]},
26      install_requires=["datasette>=0.44",],
27      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils~=2.0"]},
28      tests_require=["datasette-permissions-sql[test]"],
29  )

datasette-plugin/README.md

44      source venv/bin/activate
45      # Install dependencies so you can edit the plugin:
46      pip install -e '.[test]'
47      # With zsh you have to run this again for some reason:
48      source venv/bin/activate
62      ]
63  
64  You can run the default test for your plugin like so:
65  
66      pytest
67  
68  This will execute the test in `tests/test_my_new_plugin.py`, which confirms that the plugin has been installed.
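
That default test presumably resembles this sketch, checking that the plugin appears in Datasette's `/-/plugins.json` (the plugin name here is the template placeholder):

```python
from datasette.app import Datasette
import pytest


@pytest.mark.asyncio
async def test_plugin_is_installed():
    ds = Datasette([], memory=True)
    response = await ds.client.get("/-/plugins.json")
    assert response.status_code == 200
    # The new plugin should be listed among the installed plugins
    assert "datasette-my-new-plugin" in {p["name"] for p in response.json()}
```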
69  
70  Now you can open the `datasette_my_new_plugin/__init__.py` file and start adding your [plugin hooks](https://docs.datasette.io/en/stable/plugin_hooks.html).
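
A minimal first hook might look like this sketch, which registers a custom SQL function using Datasette's documented `prepare_connection` hook (the `hello_world` function is illustrative):

```python
from datasette import hookimpl


@hookimpl
def prepare_connection(conn):
    # Makes "select hello_world()" usable in SQL queries
    conn.create_function("hello_world", 0, lambda: "Hello world!")
```
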
90      git push -u origin main
91  
92  The template will have created a GitHub Action which runs your plugin's test suite against every commit.
93  
94  ## Publishing your plugin as a package to PyPI

datasette-plugin/requirements.txt

1   cookiecutter
2   pytest
3   

datasette-plugin-demos/README.md

27      pipenv shell
28  
29  Now install the dependencies and test dependencies:
30  
31      pip install -e '.[test]'
32  
33  To run the tests:
34  
35      pytest

datasette-plugin-demos/setup.py

16      },
17      install_requires=["datasette"],
18      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
19      tests_require=["datasette-plugin-demos[test]"],
20  )

datasette-placekey/README.md

43      pipenv shell
44  
45  Now install the dependencies and test dependencies:
46  
47      pip install -e '.[test]'
48  
49  To run the tests:
50  
51      pytest

datasette-placekey/setup.py

31      entry_points={"datasette": ["placekey = datasette_placekey"]},
32      install_requires=["datasette", "placekey"],
33      extras_require={"test": ["pytest", "pytest-asyncio"]},
34      tests_require=["datasette-placekey[test]"],
35      python_requires=">=3.6",
36  )

datasette-plugin-template-demo/README.md

30      pipenv shell
31  
32  Now install the dependencies and test dependencies:
33  
34      pip install -e '.[test]'
35  
36  To run the tests:
37  
38      pytest

datasette-plugin-template-demo/setup.py

30      entry_points={"datasette": ["plugin_template_demo = datasette_plugin_template_demo"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio"]},
33      tests_require=["datasette-plugin-template-demo[test]"],
34      package_data={
35          "datasette_plugin_template_demo": ["static/*", "templates/*"]

datasette-pretty-json/setup.py

25      entry_points={"datasette": ["pretty_json = datasette_pretty_json"]},
26      install_requires=["datasette"],
27      extras_require={"test": ["pytest"]},
28  )

datasette-psutil/setup.py

25      entry_points={"datasette": ["psutil = datasette_psutil"]},
26      install_requires=["datasette>=0.44", "psutil"],
27      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
28      tests_require=["datasette-psutil[test]"],
29  )

datasette-publish-azure/README.md

34      pipenv shell
35  
36  Now install the dependencies and test dependencies:
37  
38      pip install -e '.[test]'
39  
40  To run the tests:
41  
42      pytest

datasette-publish-azure/setup.py

30      entry_points={"datasette": ["publish_azure = datasette_publish_azure"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio"]},
33      tests_require=["datasette-publish-azure[test]"],
34      package_data={"datasette_publish_azure": ["static/*", "templates/*"]},
35      python_requires=">=3.6",

datasette-publish-fly/setup.py

30      entry_points={"datasette": ["publish_fly = datasette_publish_fly"]},
31      install_requires=["datasette>=0.44"],
32      extras_require={"test": ["pytest"]},
33      tests_require=["datasette-publish-fly[test]"],
34  )

datasette-publish-vercel/setup.py

30      entry_points={"datasette": ["publish_vercel = datasette_publish_vercel"]},
31      install_requires=["datasette>=0.52"],
32      extras_require={"test": ["pytest"]},
33      tests_require=["datasette-publish-vercel[test]"],
34  )

datasette-registry/requirements.txt

1   conformity
2   pytest
3   requests
4   datasette

datasette-registry/test_registry.py

12  
13  
14  def test_registry():
15      data = json.load(open('registry.json'))
16      assert [] == instances.errors(data)

datasette-render-binary/setup.py

25      entry_points={"datasette": ["render_binary = datasette_render_binary"]},
26      install_requires=["datasette", "filetype"],
27      extras_require={"test": ["pytest"]},
28      tests_require=["datasette-render-binary[test]"],
29  )

datasette-render-html/test_datasette_render_html.py

2   from datasette.app import Datasette
3   from markupsafe import Markup
4   import pytest
5   
6   
7   @pytest.fixture
8   def configured_datasette():
9       return Datasette(
25  
26  
27  def test_leaves_regular_columns_alone(configured_datasette):
28      assert None == render_cell(
29          "a<b", "not_definition", "glossary", "docs", configured_datasette
31  
32  
33  def test_marks_configured_column_safe(configured_datasette):
34      assert Markup("a<b") == render_cell(
35          "a<b", "definition", "glossary", "docs", configured_datasette
37  
38  
39  def test_none_becomes_blank_string(configured_datasette):
40      assert Markup("") == render_cell(
41          None, "definition", "glossary", "docs", configured_datasette

datasette-render-html/setup.py

25      entry_points={"datasette": ["render_html = datasette_render_html"]},
26      install_requires=["datasette"],
27      extras_require={"test": ["pytest"]},
28  )

datasette-render-images/setup.py

25      entry_points={"datasette": ["render_images = datasette_render_images"]},
26      install_requires=["datasette"],
27      extras_require={"test": ["pytest"]},
28      tests_require=["datasette-render-images[test]"],
29  )

datasette-render-images/test_datasette_render_images.py

2   from datasette.app import Datasette
3   import jinja2
4   import pytest
5   
6   GIF_1x1 = b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x01D\x00;"
7   PNG_1x1 = b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01\x08\x00\x00\x00\x00:~\x9bU\x00\x00\x00\nIDATx\x9cc\xfa\x0f\x00\x01\x05\x01\x02\xcf\xa0.\xcd\x00\x00\x00\x00IEND\xaeB`\x82"
8   # https://github.com/python/cpython/blob/master/Lib/test/imghdrdata/python.jpg
9   JPEG = b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x01\x00\x01\x00\x01\x00\x00\xff\xdb\x00C\x00\x03\x02\x02\x02\x02\x02\x03\x02\x02\x02\x03\x03\x03\x03\x04\x06\x04\x04\x04\x04\x04\x08\x06\x06\x05\x06\t\x08\n\n\t\x08\t\t\n\x0c\x0f\x0c\n\x0b\x0e\x0b\t\t\r\x11\r\x0e\x0f\x10\x10\x11\x10\n\x0c\x12\x13\x12\x10\x13\x0f\x10\x10\x10\xff\xdb\x00C\x01\x03\x03\x03\x04\x03\x04\x08\x04\x04\x08\x10\x0b\t\x0b\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\xff\xc0\x00\x11\x08\x00\x10\x00\x10\x03\x01"\x00\x02\x11\x01\x03\x11\x01\xff\xc4\x00\x16\x00\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x07\x04\x05\xff\xc4\x00$\x10\x00\x01\x04\x01\x04\x02\x02\x03\x00\x00\x00\x00\x00\x00\x00\x00\x01\x02\x03\x04\x06\x05\x07\x08\x12\x13\x11"\x00\x14\t12\xff\xc4\x00\x15\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\xff\xc4\x00#\x11\x00\x01\x02\x05\x03\x05\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x02\x11\x03\x04\x05\x06!\x00\x121\x15\x16a\x81\xe1\xff\xda\x00\x0c\x03\x01\x00\x02\x11\x03\x11\x00?\x00\x14\xa6\xd2j\x1bs\xc1\xe6\x13\x12\xd4\x95\x1c\xf3\x11c\xe4%e\xbe\xbaZ\xeciE@\xb1\xe5 \xb2T\xa5\x1f\xd2\xca\xb8\xfa\xf2 \xab\x96=\x97l\x935\xe6\x9bw\xd7\xe6m\xa7\x17\x81\xa5W\x1c\x7f\x1c\xeaq\xe2K9\xd7\xe3"S\xf2\x1ai\xde\xd4qJ8\xb4\x82\xe8K\x89*qi\x1e\xcd-!;\xf1\xef\xb9\x1at\xac\xee\xa1Zu\x8e\xd5H\xace[\x85\x8b\x81\x85{!)\x98g\xa9k\x94\xb9IeO\xb9\xc8\x85)\x11K\x81*\xf0z\xd9\xf2<\x80~U\xbe\r\xf6b\xa1@\xcc\xe8\xe6\x9a=\\\xb7C\xb3\xd7zeX\xb1\xd9Q!\x88\xbfd\xb8\xd3\xf1\xc3h\x04)\xc0\xd0\xfe\xbb<\x02\xe0<T\x07\xb4\xbd\xd9{T\xe6\'\xfbn\xdf\x94`\x14\x82b\x13\x8d\xb8R\x98(7\x05\x89ry`\xe42\x89o\xc3\x82\x8e\xa7R\x8c\xea \x8d\xbex\x19\x1f\x07\xad\x7f\xff\xd9'
10  
11  
12  @pytest.mark.parametrize(
13      "input,expected",
14      [
36      ],
37  )
38  def test_render_cell(input, expected):
39      actual = render_cell(input, None)
40      assert expected == actual
42  
43  
44  def test_render_cell_maximum_image_size():
45      max_length = 100 * 1024
46      max_image = GIF_1x1 + (b"b" * (max_length - len(GIF_1x1)))
52  
53  
54  def test_render_cell_different_size_limit():
55      max_length = 100 * 1024
56      max_image_plus_one = GIF_1x1 + (b"b" * (max_length - len(GIF_1x1))) + b"b"
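
The fixtures and tests above outline how the plugin works: binary values that start with a known image signature are rendered as inline `<img>` tags using a `data:` URI, anything else returns `None`, and images over a size limit (100 KB by default, configurable per the second test) are skipped. A hedged sketch of that logic, not the plugin's verbatim source:

    import base64

    import jinja2

    # Signature-to-media-type table; matches the GIF/PNG/JPEG constants
    # defined in the tests above
    SIGNATURES = {
        b"GIF87a": "image/gif",
        b"GIF89a": "image/gif",
        b"\x89PNG\r\n\x1a\n": "image/png",
        b"\xff\xd8\xff": "image/jpeg",
    }

    DEFAULT_MAX_SIZE = 100 * 1024  # same cap as max_length in the tests


    def render_cell(value, datasette):
        # datasette is accepted to match the two-argument call in the
        # tests; the real plugin presumably uses it to read a
        # configurable size limit
        if not isinstance(value, bytes) or len(value) > DEFAULT_MAX_SIZE:
            return None
        for signature, media_type in SIGNATURES.items():
            if value.startswith(signature):
                return jinja2.Markup(
                    '<img src="data:{};base64,{}">'.format(
                        media_type, base64.b64encode(value).decode("utf-8")
                    )
                )
        return None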

datasette-render-markdown/README.md

110 You can configure support for extensions using the `"extensions"` key in your plugin metadata configuration.
111 
112 Since extensions may introduce new HTML tags, you will also need to add those tags to the list of tags that are allowed by the [Bleach](https://bleach.readthedocs.io/) sanitizer. You can do that using the `"extra_tags"` key, and you can whitelist additional HTML attributes using `"extra_attrs"`. See [the Bleach documentation](https://bleach.readthedocs.io/en/latest/clean.html#allowed-tags-tags) for more information on this.
113 
114 Here's how to enable support for [Markdown tables](https://python-markdown.github.io/extensions/tables/):
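
The snippet itself is cut off by the search context, but based on the description above, a configuration enabling the tables extension would look something like this sketch, written in the `Datasette(metadata=...)` form the test fixtures elsewhere in these results use (the tag list is an assumption about what the tables extension emits):

    from datasette.app import Datasette

    datasette = Datasette(
        memory=True,
        metadata={
            "plugins": {
                "datasette-render-markdown": {
                    # enable the python-markdown "tables" extension
                    "extensions": ["tables"],
                    # let the table markup through the Bleach sanitizer
                    "extra_tags": ["table", "thead", "tbody", "tr", "th", "td"],
                }
            }
        },
    )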

datasette-render-markdown/setup.py

25      entry_points={"datasette": ["render_markdown = datasette_render_markdown"]},
26      install_requires=["datasette", "markdown", "bleach"],
27      extras_require={"test": ["pytest", "pytest-asyncio"]},
28      tests_require=["datasette-render-markdown[test]"],
29  )

datasette-render-timestamps/setup.py

25      entry_points={"datasette": ["render_timestamps = datasette_render_timestamps"]},
26      install_requires=["datasette"],
27      extras_require={"test": ["pytest"]},
28      tests_require=["datasette-render-timestamps[test]"],
29  )

datasette-ripgrep/README.md

18  - [with.\*AsyncClient glob=datasette/\*\*](https://ripgrep.datasette.io/-/ripgrep?pattern=with.*AsyncClient&glob=datasette%2F%2A%2A) - search for that pattern only within the top-level `datasette/` folder
19  - ["sqlite-utils\[">\] glob=setup.py](https://ripgrep.datasette.io/-/ripgrep?pattern=%22sqlite-utils%5B%22%3E%5D&glob=setup.py) - a regular expression search for packages that depend on either `sqlite-utils` or `sqlite-utils>=some-version`
20  - [test glob=!\*.html](https://ripgrep.datasette.io/-/ripgrep?pattern=test&glob=%21*.html) - search for the string `test` but exclude results in HTML files
21  
22  ## Installation
65      pipenv shell
66  
67  Now install the dependencies and test dependencies:
68  
69      pip install -e '.[test]'
70  
71  To run the tests:
72  
73      pytest

datasette-ripgrep/setup.py

31      package_data={"datasette_ripgrep": ["templates/*.html"]},
32      install_requires=["datasette"],
33      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
34      tests_require=["datasette-ripgrep[test]"],
35      python_requires=">=3.6",
36  )

datasette-rure/README.md

13  The plugin is built on top of the [rure-python](https://github.com/davidblewett/rure-python) library by David Blewett.
14  
15  ## regexp() to test regular expressions
16  
17  You can test if a value matches a regular expression like this:
18  
19      select regexp('hi.*there', 'hi there')
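
Plugins like this typically register the SQL function on each connection through Datasette's `prepare_connection` hook. A minimal sketch of that pattern, using Python's `re` module as a stand-in for the `rure` library the plugin actually wraps:

    import re

    from datasette import hookimpl


    @hookimpl
    def prepare_connection(conn):
        # Register a two-argument regexp(pattern, value) SQL function,
        # returning 1 on a match and 0 otherwise
        conn.create_function(
            "regexp",
            2,
            lambda pattern, value: 1 if re.search(pattern, value or "") else 0,
        )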

datasette-rure/setup.py

25      entry_points={"datasette": ["rure = datasette_rure"]},
26      install_requires=["datasette", "rure"],
27      extras_require={"test": ["pytest"]},
28      tests_require=["datasette-rure[test]"],
29  )

datasette-saved-queries/README.md

33      pipenv shell
34  
35  Now install the dependencies and test dependencies:
36  
37      pip install -e '.[test]'
38  
39  To run the tests:
40  
41      pytest

datasette-saved-queries/setup.py

30      entry_points={"datasette": ["saved_queries = datasette_saved_queries"]},
31      install_requires=["datasette>=0.45", "sqlite-utils"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
33      tests_require=["datasette-saved-queries[test]"],
34  )

datasette-schema-versions/setup.py

30      entry_points={"datasette": ["schema_versions = datasette_schema_versions"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
33      tests_require=["datasette-schema-versions[test]"],
34  )

datasette-seaborn/README.md

48      pipenv shell
49  
50  Now install the dependencies and test dependencies:
51  
52      pip install -e '.[test]'
53  
54  To run the tests:
55  
56      pytest

datasette-seaborn/setup.py

30      entry_points={"datasette": ["seaborn = datasette_seaborn"]},
31      install_requires=["datasette>=0.50", "seaborn>=0.11.0"],
32      extras_require={"test": ["pytest", "pytest-asyncio"]},
33      tests_require=["datasette-seaborn[test]"],
34  )

datasette-search-all/setup.py

31      package_data={"datasette_search_all": ["templates/*.html"]},
32      install_requires=["datasette>=0.51"],
33      extras_require={"test": ["pytest", "pytest-asyncio", "sqlite-utils"]},
34      tests_require=["datasette-search-all[test]"],
35  )

datasette-sentry/setup.py

25      py_modules=["datasette_sentry"],
26      install_requires=["sentry-sdk"],
27      extras_require={"test": ["pytest", "datasette"]},
28      classifiers=[
29          "License :: OSI Approved :: Apache Software License",

datasette-sentry/test_datasette_sentry.py

13  
14  
15  def test_asgi_wrapper():
16      app = object()
17      wrapper = asgi_wrapper(FakeDatasette({"dsn": "https://demo@sentry.io/1234"}))
21  
22  
23  def test_not_wrapped_if_no_configuration():
24      app = object()
25      wrapper = asgi_wrapper(FakeDatasette({}))

datasette-show-errors/test_datasette_show_errors.py

2   
3   
4   def test_asgi_wrapper():
5       app = object()
6       wrapper = asgi_wrapper()

datasette-show-errors/setup.py

26      install_requires=["starlette", "datasette"],
27      extras_require={
28          "test": ["pytest"]
29      },
30  )

datasette-small/Dockerfile

7   
8   RUN pip install csvs-to-sqlite datasette
9   RUN wget -O data.csv "https://latest.datasette.io/fixtures/compound_three_primary_keys.csv?_stream=on&_size=max"
10  RUN csvs-to-sqlite data.csv data.db
11  RUN datasette inspect data.db --inspect-file inspect-data.json

datasette-template-sql/setup.py

30      entry_points={"datasette": ["template-sql = datasette_template_sql"]},
31      install_requires=["datasette>=0.54"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "sqlite-utils"]},
33      tests_require=["datasette-template-sql[test]"],
34      python_requires=">=3.6",
35  )

datasette-tiles/README.md

126     pipenv shell
127 
128 Now install the dependencies and test dependencies:
129 
130     pip install -e '.[test]'
131 
132 To run the tests:
133 
134     pytest

datasette-tiles/setup.py

30      entry_points={"datasette": ["tiles = datasette_tiles"]},
31      install_requires=["datasette", "datasette-leaflet>=0.2.2"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "datasette-basemap>=0.2"]},
33      tests_require=["datasette-tiles[test]"],
34      package_data={"datasette_tiles": ["templates/*"]},
35      python_requires=">=3.6",

datasette-upload-csvs/setup.py

33      ],
34      extras_require={
35          "test": ["pytest", "pytest-asyncio", "asgiref", "httpx", "asgi-lifespan"]
36      },
37      package_data={"datasette_upload_csvs": ["templates/*.html"]},

datasette-vega/README.md

9   ![Datasette Vega interface](https://raw.githubusercontent.com/simonw/datasette-vega/master/datasette-vega.png)
10  
11  Try out the latest master build as a live demo at https://datasette-vega-latest.datasette.io/ or try the latest release installed as a plugin at https://fivethirtyeight.datasettes.com/
12  
13  To add this to your Datasette installation, install the plugin like so:

datasette-vega/package.json

14      "start": "REACT_APP_STAGE=dev react-scripts start",
15      "build": "react-scripts build",
16      "test": "react-scripts test --env=jsdom",
17      "eject": "react-scripts eject"
18    }

datasette-vega/setup.py

69      },
70      install_requires=["datasette"],
71      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
72  )

datasette-write/README.md

33      pipenv shell
34  
35  Now install the dependencies and test dependencies:
36  
37      pip install -e '.[test]'
38  
39  To run the tests:
40  
41      pytest

datasette-write/setup.py

31      entry_points={"datasette": ["write = datasette_write"]},
32      install_requires=["datasette>=0.45"],
33      extras_require={"test": ["pytest", "pytest-asyncio", "httpx"]},
34      tests_require=["datasette-write[test]"],
35  )

datasette-yaml/README.md

22  ## Demo
23  
24  The plugin is running on [covid-19.datasettes.com](https://covid-19.datasettes.com/) - for example [/covid/latest_ny_times_counties_with_populations.yaml](https://covid-19.datasettes.com/covid/latest_ny_times_counties_with_populations.yaml)
25  
26  ## Development
36      pipenv shell
37  
38  Now install the dependencies and test dependencies:
39  
40      pip install -e '.[test]'
41  
42  To run the tests:
43  
44      pytest

datasette-yaml/setup.py

30      entry_points={"datasette": ["yaml = datasette_yaml"]},
31      install_requires=["datasette"],
32      extras_require={"test": ["pytest", "pytest-asyncio", "httpx", "sqlite-utils"]},
33      tests_require=["datasette-yaml[test]"],
34  )

datasette.io/README.md

28      scripts/build.sh
29  
30  Then to run the tests (which check that certain pages do not return errors):
31  
32      scripts/test.sh
33  
34  To see the site in your browser:

datasette.io/build_directory.py

76      node["topics"] = [n["topic"]["name"] for n in repository_topics["nodes"]]
77      default_branch_ref = node.pop("defaultBranchRef")
78      node["latest_commit"] = default_branch_ref["target"]["oid"]
79      return node, releases["nodes"]
80  
113     db = sqlite_utils.Database(db_filename)
114     repos_to_fetch_releases_for = {"simonw/datasette"}
115     if "latest_commit" not in db["datasette_repos"].columns_dict:
116         previous_hashes = {
117             row["nameWithOwner"]: None for row in db["datasette_repos"].rows
119     else:
120         previous_hashes = {
121             row["nameWithOwner"]: row["latest_commit"]
122             for row in db["datasette_repos"].rows
123         }
160     for row in db["datasette_repos"].rows:
161         if (
162             row["latest_commit"] != previous_hashes.get(row["nameWithOwner"])
163             or force_fetch_readmes
164         ):
187   repos.stargazers_count,
188   releases.tag_name,
189   max(releases.created_at) as latest_release_at,
190   repos.created_at as created_at,
191   datasette_repos.openGraphImageUrl,
231   repos.id
232 order by
233   latest_release_at desc
234 """.format(
235                 repo_table=repo_table

datasette.io/news.yaml

1   - date: 2021-06-05
2     body: |-
3       [Datasette 0.57](https://docs.datasette.io/en/stable/changelog.html#v0-57) is out with an important [security patch](https://github.com/simonw/datasette/security/advisories/GHSA-xw7c-jx9m-xh5g) plus a number of new features and bug fixes. Datasette 0.56.1, also out today, provides the security patch for users who are not yet ready to upgrade to the latest version.
4   - date: 2021-05-10
5     body: |-

db-to-sqlite/README.md

128     pipenv shell
129 
130 Now install the dependencies and test dependencies:
131 
132     pip install -e '.[test]'
133 
134 To run the tests:
135 
136     pytest
137 
138 This will skip tests against MySQL or PostgreSQL if you do not have their additional dependencies installed.
139 
140 You can install those extra dependencies like so:
141 
142     pip install -e '.[test_mysql,test_postgresql]'
143 
144 You can alternatively use `pip install psycopg2-binary` if you cannot install the `psycopg2` dependency used by the `test_postgresql` extra.
145 
146 See [Running a MySQL server using Homebrew](https://til.simonwillison.net/homebrew/mysql-homebrew) for tips on running the tests against MySQL on macOS, including how to install the `mysqlclient` dependency.
147 
148 By default, the PostgreSQL and MySQL tests expect to run against servers on localhost. You can use environment variables to point them at different test database servers:
149 
150 - `MYSQL_TEST_DB_CONNECTION` - defaults to `mysql://root@localhost/test_db_to_sqlite`
151 - `POSTGRESQL_TEST_DB_CONNECTION` - defaults to `postgresql://localhost/test_db_to_sqlite`
152 
153 The database you indicate in the environment variable - `test_db_to_sqlite` by default - will be deleted and recreated on every test run.
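
A small illustrative helper (not from the project itself) showing how those environment variables combine with the documented defaults:

    import os

    DEFAULTS = {
        "MYSQL": "mysql://root@localhost/test_db_to_sqlite",
        "POSTGRESQL": "postgresql://localhost/test_db_to_sqlite",
    }


    def connection_url(engine):
        # engine is "MYSQL" or "POSTGRESQL"; the environment variable,
        # if set, overrides the localhost default
        return os.environ.get(engine + "_TEST_DB_CONNECTION", DEFAULTS[engine])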

db-to-sqlite/setup.py

26      install_requires=["sqlalchemy", "sqlite-utils>=2.9.1", "click"],
27      extras_require={
28          "test": ["pytest"],
29          "test_mysql": ["pytest", "mysqlclient"],
30          "test_postgresql": ["pytest", "psycopg2"],
31          "mysql": ["mysqlclient"],
32          "postgresql": ["psycopg2"],
33      },
34      tests_require=["db-to-sqlite[test]"],
35      entry_points="""
36          [console_scripts]

django-sql-dashboard/README.md

4   [![Changelog](https://img.shields.io/github/v/release/simonw/django-sql-dashboard?include_prereleases&label=changelog)](https://github.com/simonw/django-sql-dashboard/releases)
5   [![Tests](https://github.com/simonw/django-sql-dashboard/workflows/Test/badge.svg)](https://github.com/simonw/django-sql-dashboard/actions?query=workflow%3ATest)
6   [![Documentation Status](https://readthedocs.org/projects/django-sql-dashboard/badge/?version=latest)](http://django-sql-dashboard.datasette.io/en/latest/?badge=latest)
7   [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/django-sql-dashboard/blob/main/LICENSE)
8   
17  - Safely run one or more read-only SQL queries against your database and view the results in your browser
18  - Bookmark queries and share those links with other members of your team
19  - Create [saved dashboards](https://django-sql-dashboard.datasette.io/en/latest/saved-dashboards.html) from your queries, with full control over who can view and edit them
20  - [Named parameters](https://django-sql-dashboard.datasette.io/en/latest/sql.html#sql-parameters) such as `select * from entries where id = %(id)s` will be turned into form fields, allowing quick creation of interactive dashboards
21  - Produce [bar charts](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#bar-label-bar-quantity), [progress bars](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#total-count-completed-count) and more from SQL queries, with the ability to easily create new [custom dashboard widgets](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#custom-widgets) using the Django template system
22  - Write SQL queries that safely construct and render [markdown](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#markdown) and [HTML](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#html)
23  - Export the full results of a SQL query as a downloadable CSV or TSV file, using a combination of Django's [streaming HTTP response](https://docs.djangoproject.com/en/3.2/ref/request-response/#django.http.StreamingHttpResponse) mechanism and PostgreSQL [server-side cursors](https://www.psycopg.org/docs/usage.html#server-side-cursors) to efficiently stream large amounts of data without running out of resources
24  - Copy and paste the results of SQL queries directly into tools such as Google Sheets or Excel

django-sql-dashboard/conftest.py

1   import pytest
2   from django.contrib.auth.models import Permission
3   
5   
6   
7   @pytest.fixture
8   def dashboard_db(settings, db):
9       settings.DATABASES["dashboard"]["OPTIONS"] = {
12  
13  
14  @pytest.fixture
15  def execute_sql_permission():
16      return Permission.objects.get(
21  
22  
23  @pytest.fixture
24  def saved_dashboard(dashboard_db):
25      dashboard = Dashboard.objects.create(
26          slug="test",
27          title="Test dashboard",
28          description="This [supports markdown](http://example.com/)",

django-sql-dashboard/pytest.ini

1   [pytest]
2   addopts = -p pytest_use_postgresql
3   DJANGO_SETTINGS_MODULE = config.settings
4   site_dirs = test_project/

django-sql-dashboard/pytest_use_postgresql.py

1   import os
2   
3   import pytest
4   from dj_database_url import parse
5   from django.conf import settings
6   from testing.postgresql import Postgresql
7   
8   _POSTGRESQL = Postgresql()
9   
10  
11  @pytest.hookimpl(tryfirst=True)
12  def pytest_load_initial_conftests(early_config, parser, args):
13      os.environ["DJANGO_SETTINGS_MODULE"] = early_config.getini("DJANGO_SETTINGS_MODULE")
14      settings.DATABASES["default"] = parse(_POSTGRESQL.url())
16  
17  
18  def pytest_unconfigure(config):
19      _POSTGRESQL.stop()

django-sql-dashboard/setup.py

40      install_requires=["Django>=3.0", "markdown", "bleach"],
41      extras_require={
42          "test": [
43              "black==21.5b2",
44              "psycopg2",
45              "pytest",
46              "pytest-django==4.2.0",
47              "pytest-pythonpath",
48              "dj-database-url",
49              "testing.postgresql",
50              "beautifulsoup4",
51              "html5lib",
52          ],
53      },
54      tests_require=["django-sql-dashboard[test]"],
55      python_requires=">=3.6",
56  )

dogsheep-beta/README.md

191     pipenv shell
192 
193 Now install the dependencies and test dependencies:
194 
195     pip install -e '.[test]'
196 
197 To run the tests:
198 
199     pytest

dogsheep-beta/setup.py

35      install_requires=["datasette>=0.50.2", "click", "PyYAML", "sqlite-utils>=3.0"],
36      extras_require={
37          "test": ["pytest", "pytest-asyncio", "httpx", "beautifulsoup4", "html5lib"]
38      },
39      tests_require=["dogsheep-beta[test]"],
40  )

dogsheep-photos/setup.py

32          "osxphotos>=0.28.13 ; sys_platform=='darwin'",
33      ],
34      extras_require={"test": ["pytest"]},
35      tests_require=["dogsheep-photos[test]"],
36  )

evernote-to-sqlite/setup.py

33      """,
34      install_requires=["click", "sqlite-utils>=3.0"],
35      extras_require={"test": ["pytest"]},
36      tests_require=["evernote-to-sqlite[test]"],
37  )

evernote-to-sqlite/README.md

44      pipenv shell
45  
46  Now install the dependencies and test dependencies:
47  
48      pip install -e '.[test]'
49  
50  To run the tests:
51  
52      pytest

fara-datasette/README.md

5   > This repository has been replaced by https://github.com/simonw/fara-history
6   
7   This code pulls the latest CSVs from https://efile.fara.gov/ords/f?p=API:BULKDATA and loads them into a SQLite database suitable for publishing using [Datasette](https://datasette.readthedocs.io/).
8   
9   

fec-to-sqlite/setup.py

28      """,
29      install_requires=["sqlite-utils", "click", "requests", "fecfile", "tqdm"],
30      extras_require={"test": ["pytest"]},
31      tests_require=["fec-to-sqlite[test]"],
32  )

genome-to-sqlite/README.md

13  ## How to use
14  
15  First, export your genome. This tool has only been tested against 23andMe so far. You can request an export of your genome from https://you.23andme.com/tools/data/download/
16  
17  Now you can convert the resulting `export.zip` file to SQLite like so:

genome-to-sqlite/setup.py

28      """,
29      install_requires=["sqlite-utils"],
30      extras_require={"test": ["pytest"]},
31      tests_require=["genome-to-sqlite[test]"],
32  )

geojson-to-sqlite/README.md

69  For example, to load a day of earthquake reports from USGS:
70  
71      $ geojson-to-sqlite quakes.db quakes tests/quakes.ndjson --nl --pk=id --spatialite
72  
73  When using newline-delimited JSON, tables will also be created based on the first feature alone, instead of guessing column types from the first 100 features.
74  
75  If you want to use a larger subset of your data to guess column types (for example, if some fields are inconsistent), you can use [fiona](https://fiona.readthedocs.io/en/latest/cli.html) to collect features into a single collection.
76  
77      $ head tests/quakes.ndjson | fio collect | geojson-to-sqlite quakes.db quakes - --spatialite
78  
79  This will take the first 10 lines from `tests/quakes.ndjson`, pass them to `fio collect`, which turns them into a single feature collection, and pass that, in turn, to `geojson-to-sqlite`.
80  
81  ## Using this with Datasette

geojson-to-sqlite/setup.py

28      """,
29      install_requires=["sqlite-utils>=2.2", "shapely"],
30      extras_require={"test": ["pytest"]},
31      tests_require=["geojson-to-sqlite[test]"],
32  )

github-to-sqlite/setup.py

28      """,
29      install_requires=["sqlite-utils>=2.7.2", "requests", "PyYAML"],
30      extras_require={"test": ["pytest", "requests-mock", "bs4"]},
31      tests_require=["github-to-sqlite[test]"],
32  )

global-power-plants-datasette/README.md

1   # global-power-plants-datasette
2   
3   [![Fetch latest data and deploy with Datasette](https://github.com/simonw/global-power-plants-datasette/workflows/Fetch%20latest%20data%20and%20deploy%20with%20Datasette/badge.svg)](https://github.com/simonw/global-power-plants-datasette/actions?query=workflow%3A%22Fetch+latest+data+and+deploy+with+Datasette%22)
4   
5   Global power plants from https://github.com/wri/global-power-plant-database

google-takeout-to-sqlite/setup.py

28      """,
29      install_requires=["sqlite-utils~=1.11"],
30      extras_require={"test": ["pytest"]},
31      tests_require=["google-takeout-to-sqlite[test]"],
32  )

hacker-news-to-sqlite/setup.py

28      """,
29      install_requires=["sqlite-utils", "click", "requests", "tqdm"],
30      extras_require={"test": ["pytest", "requests-mock"]},
31      tests_require=["hacker-news-to-sqlite[test]"],
32  )

healthkit-to-sqlite/setup.py

33      """,
34      install_requires=["sqlite-utils>=2.4.4"],
35      extras_require={"test": ["pytest"]},
36      tests_require=["healthkit-to-sqlite[test]"],
37  )

iam-to-sqlite/README.md

14      $ pip install iam-to-sqlite
15  
16  You will also need the `aws` command-line tool. Here are [the installation instructions](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) for that.
17  
18  ## Usage
41      pipenv shell
42  
43  Now install the dependencies and test dependencies:
44  
45      pip install -e '.[test]'
46  
47  To run the tests:
48  
49      pytest

iam-to-sqlite/setup.py

33      """,
34      install_requires=["click", "sqlite-utils"],
35      extras_require={"test": ["pytest"]},
36      tests_require=["iam-to-sqlite[test]"],
37      python_requires=">=3.6",
38  )

inaturalist-to-sqlite/setup.py

28      """,
29      install_requires=["sqlite-utils>=2.0", "click", "requests"],
30      extras_require={"test": ["pytest"]},
31      tests_require=["inaturalist-to-sqlite[test]"],
32  )

markdown-to-sqlite/setup.cfg

1   [aliases]
2   test=pytest

markdown-to-sqlite/setup.py

24      packages=find_packages(),
25      install_requires=["yamldown", "markdown", "sqlite-utils", "click"],
26      extras_require={"test": ["pytest"]},
27      tests_require=["markdown-to-sqlite[test]"],
28      setup_requires=["pytest-runner"],
29      entry_points="""
30          [console_scripts]

museums/museums.yaml

1134    author: Chris Plante
1135    date: 2016-06-29
1136  - url: https://www.mercurynews.com/2011/02/03/tests-shine-light-on-the-secret-of-the-livermore-light-bulb/
1137    title: "Tests shine light on the secret of the Livermore light bulb"
1138    publication: The Mercury News
1996    A teetotaler in public, Griffith was secretly a drunk. He grew increasingly paranoid and in 1903 forced his wife Tina at gunpoint to swear on a prayer book that she was faithful to the marriage and was not involved in a perceived attempt to poison him.
1997
1998    Despite her attestations, he shot her in the head. She survived but was blinded in her right eye. Griffith was arrested three days later and was sentenced to just two years in San Quentin. His lawyer had blamed the incident on “alcoholic insanity”.
1999
2000    In 1908, upon visiting Mount Wilson Observatory, Griffith declared "If all mankind could look through that telescope, it would change the world." He offered the city money to build an observatory in 1912, but they refused, wary of further association with an attempted murderer.
2179  wikipedia_url: https://en.wikipedia.org/wiki/Donner_Party
2180  description: |-
2181    The Donner Party is notorious as the single greatest tragedy to occur during the American westward migration of the mid-19th century.
2182
2183    32 members of the Donner and Reed families, plus their employees, set out west from Independence, Missouri, on the 12th of May 1846, in nine wagons at the rear of a train of almost 500.
2397    Ray treated skulls as art, and Ray and his wife Alkmene were both keen artists. The living room displayed art and a number of skulls, but these merely hinted at what was to come.
2398
2399    As we descended deeper into the house, Jacob explained that Ray had won the "grossest dead thing" Halloween contest so many times that the competition had to forbid him from entering, to give other contestants a chance. "Puss in Boots", a mummified cat wearing boots, was one of the winning entrants.
2400
2401    The concentration of skulls on display continued to increase, but nothing could prepare us for Ray's basement.

pocket-to-sqlite/setup.py

28      """,
29      install_requires=["sqlite-utils>=2.4.4", "click", "requests"],
30      extras_require={"test": ["pytest"]},
31      tests_require=["pocket-to-sqlite[test]"],
32  )

register-of-members-interests-datasette/README.md

5   Code for parsing the mySociety Register of Members' Interests XML, turning it into SQLite and publishing it with Datasette.
6   
7   Fetches and publishes the latest data once every 24 hours, using GitHub Actions.
8   
9   Data source is http://data.mysociety.org/datasets/members-interest/

shapefile-to-sqlite/setup.py

28      """,
29      install_requires=["sqlite-utils>=2.2", "Shapely", "Fiona", "pyproj"],
30      extras_require={"test": ["pytest"]},
31      tests_require=["shapefile-to-sqlite[test]"],
32  )

shapefile-to-sqlite/pytest.ini

1   [pytest]
2   filterwarnings =
3       ignore:.*the imp module is deprecated.*:DeprecationWarning

sphinx-to-sqlite/README.md

10  ## Demo
11  
12  You can see the results of running this tool against the [Datasette documentation](https://docs.datasette.io/) at https://latest-docs.datasette.io/docs/sections
13  
14  ## Installation
40      pipenv shell
41  
42  Now install the dependencies and test dependencies:
43  
44      pip install -e '.[test]'
45  
46  To run the tests:
47  
48      pytest

sphinx-to-sqlite/setup.py

33      """,
34      install_requires=["click", "sqlite-utils"],
35      extras_require={"test": ["pytest"]},
36      tests_require=["sphinx-to-sqlite[test]"],
37  )

sqlite-diffable/setup.py

22      version=VERSION,
23      license="Apache License, Version 2.0",
24      packages=find_packages(exclude="tests"),
25      install_requires=["click", "sqlite-utils"],
26      extras_require={"test": ["pytest", "black"]},
27      entry_points="""
28          [console_scripts]
29          sqlite-diffable=sqlite_diffable.cli:cli
30      """,
31      tests_require=["sqlite-diffable[test]"],
32      url="https://github.com/simonw/sqlite-diffable",
33      classifiers=[

sqlite-dump/README.md

52      pipenv shell
53  
54  Now install the dependencies and test dependencies:
55  
56      pip install -e '.[test]'
57  
58  To run the tests:
59  
60      pytest

sqlite-dump/setup.py

29      packages=["sqlite_dump"],
30      install_requires=[],
31      extras_require={"test": ["pytest", "sqlite-utils"]},
32      tests_require=["sqlite-dump[test]"],
33  )

sqlite-fts4/setup.py

28      version=VERSION,
29      packages=["sqlite_fts4"],
30      extras_require={"test": ["pytest"]},
31      tests_require=["sqlite-fts4[test]"],
32  )

sqlite-generate/README.md

81      pipenv shell
82  
83  Now install the dependencies and test dependencies:
84  
85      pip install -e '.[test]'
86  
87  To run the tests:
88  
89      pytest

sqlite-generate/setup.py

33      """,
34      install_requires=["click", "Faker", "sqlite-utils"],
35      extras_require={"test": ["pytest"]},
36      tests_require=["sqlite-generate[test]"],
37  )

sqlite-transform/setup.py

28      """,
29      install_requires=["dateutils", "tqdm", "click"],
30      extras_require={"test": ["pytest", "sqlite-utils"]},
31      tests_require=["sqlite-transform[test]"],
32  )

sqlite-utils/MANIFEST.in

2   include README.md
3   recursive-include docs *.rst
4   recursive-include tests *.py

sqlite-utils/README.md

2   
3   [![PyPI](https://img.shields.io/pypi/v/sqlite-utils.svg)](https://pypi.org/project/sqlite-utils/)
4   [![Changelog](https://img.shields.io/github/v/release/simonw/sqlite-utils?include_prereleases&label=changelog)](https://sqlite-utils.datasette.io/en/latest/changelog.html)
5   [![Python 3.x](https://img.shields.io/pypi/pyversions/sqlite-utils.svg?logo=python&logoColor=white)](https://pypi.org/project/sqlite-utils/)
6   [![Tests](https://github.com/simonw/sqlite-utils/workflows/Test/badge.svg)](https://github.com/simonw/sqlite-utils/actions?query=workflow%3ATest)
7   [![Documentation Status](https://readthedocs.org/projects/sqlite-utils/badge/?version=latest)](http://sqlite-utils.datasette.io/en/latest/?badge=latest)
8   [![codecov](https://codecov.io/gh/simonw/sqlite-utils/branch/main/graph/badge.svg)](https://codecov.io/gh/simonw/sqlite-utils)
9   [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/sqlite-utils/blob/main/LICENSE)

sqlite-utils/setup.py

22      version=VERSION,
23      license="Apache License, Version 2.0",
24      packages=find_packages(exclude=["tests", "tests.*"]),
25      install_requires=["sqlite-fts4", "click", "click-default-group", "tabulate"],
26      setup_requires=["pytest-runner"],
27      extras_require={
28          "test": ["pytest", "black", "hypothesis"],
29          "docs": ["sphinx_rtd_theme", "sphinx-autobuild"],
30      },
33          sqlite-utils=sqlite_utils.cli:cli
34      """,
35      tests_require=["sqlite-utils[test]"],
36      url="https://github.com/simonw/sqlite-utils",
37      project_urls={

srccon-2020-datasette/README.md

5   In this repository:
6