Commit 7b741c67 authored by 20after4

Jinja dashboards

parent 27f5a95e
__pycache__
.mypy_cache
.pytest_cache
*.csv
*.pyc
.ipynb_checkpoints
.spyproject
.vscode
cache.db
*.egg-info
*.db
/.eggs
/dist
/.venv
/test/*.json
/ddd.code-workspace
/ignore/*
...@@ -6,7 +6,7 @@
## Status

This tool and supporting libraries are in early stages of experimentation and
development. The APIs are not yet stable and the featureset is not yet decided,
let alone completely implemented. Stay tuned or get involved.

## Currently supported data sources:
...@@ -21,7 +21,7 @@ let alone completely implemented. Stay tuned or get involved.
# Usage

## Installation

setup.py will install a command line tool called `dddcli`
...@@ -31,17 +31,18 @@ To install for development use:
python3 setup.py develop
```
### dddcli

You can use the following sub-commands by running `dddcli sub-command [args]` to access various functionality.

### Phabricator metrics: `dddcli metrics`

This tool extracts data from Phabricator and organizes it in a structure that facilitates further analysis.
The analysis of task activities can provide some insight into workflows.
The output of this tool will be used as the data source for charts to visualize certain agile project planning metrics.
Example usage (this is rough and can be simplified with a bit more refinement):

#### cache-columns
The first thing to do is cache the columns for the project you're interested in.
This will speed up future actions because it avoids a lot of unnecessary requests
to Phabricator that would otherwise be required to resolve the names of projects
...@@ -51,13 +52,15 @@ and workboard columns.
dddcli metrics cache-columns --project=PHID-PROJ-uier7rukzszoewbhj7ja
```
Then you can fetch the actual metrics and map them into local sqlite tables with the map sub-command:
```bash
dddcli metrics map --project=PHID-PROJ-uier7rukzszoewbhj7ja
```
Note that `--project` accepts either a `PHID` or a project `#hashtag`, so you can try `dddcli metrics map --project=#releng`, for example.
To get cli usage help, try:

```bash
...@@ -79,17 +82,42 @@ If you omit the --mock argument then it will request a rather large amount of da
To run datasette, from the ddd checkout:

```bash
export DATASETTE_PORT=8001
datasette --reload --metadata www/metadata.yaml -h 0.0.0.0 -p $DATASETTE_PORT www
```
Sample systemd units are in `etc/systemd/*`, including a file watcher to restart datasette
when the data changes.
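For illustration only, such a watcher is typically a systemd `.path` unit paired with a small oneshot `.service`; the sketch below is a hypothetical example (the unit names and the watched file are assumptions, not the actual contents of `etc/systemd/*`):

```ini
# Hypothetical sketch; unit names and the watched path are assumptions.
# datasette-restart.path: activates datasette-restart.service when the file changes.
[Path]
PathChanged=/srv/ddd/www/metrics.db

[Install]
WantedBy=multi-user.target
```

```ini
# datasette-restart.service: companion oneshot that restarts the datasette unit.
[Service]
Type=oneshot
ExecStart=/bin/systemctl restart ddd-datasette.service
```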
# Example code:

## Conduit API client:

```python
from ddd.phab import Conduit

phab = Conduit()

# Call phabricator's maniphest.search api and retrieve all results
r = phab.request('maniphest.search', {'queryKey': "KpRagEN3fCBC",
                 "limit": "40",
                 "attachments": {
                     "projects": True,
                     "columns": True
                 }})
```
This fetches every page of results. Note that the API limits a single request to
fetching **at most** 100 objects; however, `fetch_all` will request each page from the server until all available records have been retrieved:
```python
r.fetch_all()
```
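Once `fetch_all()` returns, the cursor's `data` attribute holds the accumulated records (the cli code further down iterates it the same way); a minimal usage sketch:

```python
r.fetch_all()

# r.data now holds the results from every page of the search.
for task in r.data:
    task.save()  # persist each record to the local sqlite cache
```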
## PHIDRef

Whenever encountering a phabricator `phid`, we use PHIDRef objects to wrap the phid. This provides several conveniences for working with phabricator objects efficiently. This interactive python session demonstrates how it works:
```python
In [1]: phid = PHIDRef('PHID-PROJ-uier7rukzszoewbhj7ja')
...@@ -116,31 +144,9 @@ Out[7]: PHID-PROJ-uier7rukzszoewbhj7ja
```
1. You can construct a bunch of `PHIDRef` instances and then later on you can fetch all of the data in a single call to phabricator's conduit api. This is accomplished by calling `PHObject.resolve_phids()`.
2. `resolve_phids()` can store a local cache of the phid details in the phobjects table. After `resolve_phids()` completes, all `PHObject` instances will contain the `name`, `url` and `status` of the corresponding phabricator objects.
3. An instance of `PHIDRef` can be used transparently as a database key.
4. `str(PHIDRef_instance)` returns the original `"PHID-TYPE-hash"` string.
5. `PHIDRef_instance.object` returns an instantiated `PHObject` instance (see the sketch below).
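Putting those pieces together, here is a minimal sketch of the resolve workflow; the `ddd.phobjects` import path is an assumption inferred from the class names, not confirmed by this commit:

```python
from ddd.phab import Conduit
from ddd.phobjects import PHIDRef, PHObject  # module path is an assumption

phab = Conduit()

# Wrapping a phid string is cheap and performs no network requests.
ref = PHIDRef('PHID-PROJ-uier7rukzszoewbhj7ja')

# One conduit call resolves every PHObject instance created so far.
PHObject.resolve_phids(phab)

print(str(ref))         # the original "PHID-TYPE-hash" string
print(ref.object.name)  # populated after resolve_phids()
```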
...@@ -5,7 +5,7 @@ After=network.target
[Service]
Type=simple
WorkingDirectory=/srv/ddd
ExecStart=etc/ddd-datasette.sh
User=ddd
Group=srv
...
...@@ -42,4 +42,7 @@ conduit = "ddd.conduit_cli:cli"
dddcli = "ddd.main:cli"

[tool.pyright]
reportMissingTypeStubs = true

[tool.mypy]
implicit_reexport = true
\ No newline at end of file
Subproject commit bf224aa117be3f9c882ce4de27b09f8fa3dccb8c
from rich.console import Console
console = Console(stderr=True)
...@@ -15,7 +15,7 @@ import click
from rich.console import Console
import typer
from rich.status import Status
from sqlite_utils.db import Database, chunks
from typer import Option, Typer

from ddd.boardmetrics_mapper import maptransactions
...@@ -38,13 +38,13 @@ def cache_tasks(conduit, cache, tasks, sts):
    new_instances = []
    for task in r.data:
        task.save()
        # instance = PHObject.instance(phid=PHID(key), data=vals, save=True)
        # new_instances.append(instance)
    # cache.store_all(r.data)


def cache_projects(conduit: Conduit, cache, sts):
    r = conduit.project_search(constraints={"maxDepth": 2})
    r.fetch_all(sts)
...@@ -54,6 +54,7 @@ def cache_projects(conduit: Conduit, cache, sts):
    cache.store_all(r.data)


@cli.command()
def cache_columns(ctx: typer.Context, project: str = Option("all")):
    """
...@@ -80,15 +81,13 @@ def cache_columns(ctx: typer.Context, project: str = Option("all")):
    for col in r.data:
        count += 1
        col.save()
        # col.project.save()
        if round((count / total) * 100) > pct:
            pct = round((count / total) * 100)
            sts.update(
                f"Saved [bold green]{count}[/bold green] ([bold blue]{pct}%[/bold blue]) Project Columns."
            )

    config.console.log(f"Fetched & cached {count} Project Columns.")
    config.db.conn.commit()
    config.console.log("Updating phobjects cache.")
...
...@@ -39,7 +39,7 @@ def maptransactions(
    all_metrics = set()

    @mapper("transactionType=core:edge", "meta.edge:type=41")
    def projects(t, context: Task):
        """
        edge transactions point to related objects such as subtasks,
        mentioned tasks and project tags.
...@@ -98,26 +98,26 @@ def maptransactions(
return [("subtask_resolved", "global", t["taskID"], None)] return [("subtask_resolved", "global", t["taskID"], None)]
@mapper("transactionType=status") @mapper("transactionType=status")
def status(t, context:Task): def status(t, context: Task):
ts = int(t["dateCreated"]) ts = int(t["dateCreated"])
state = t["newValue"] state = t["newValue"]
if state in ("open", "stalled", "progress"): if state in ("open", "stalled", "progress"):
#for metric in context.metrics(is_started=False): # for metric in context.metrics(is_started=False):
# metric.start(state) # metric.start(state)
context.metric(key='status').start(ts, state) context.metric(key="status").start(ts, state)
elif state in ("declined", "resolved", "invalid"): elif state in ("declined", "resolved", "invalid"):
for metric in context.metrics(is_ended=False): for metric in context.metrics(is_ended=False):
metric.end(ts, state) metric.end(ts, state)
context.metric(key='status').end(ts, state) context.metric(key="status").end(ts, state)
return [("status", "global", t["oldValue"], t["newValue"])] return [("status", "global", t["oldValue"], t["newValue"])]
@mapper("transactionType=reassign") @mapper("transactionType=reassign")
def assign(t, context): def assign(t, context):
ts = int(t["dateCreated"]) ts = int(t["dateCreated"])
if t["oldValue"]: if t["oldValue"]:
context.metric(key='assign').val(t['oldValue']).end(ts, 'reassign') context.metric(key="assign").val(t["oldValue"]).end(ts, "reassign")
context.metric(key='assign').val(t['newValue']).start(ts, 'assign') context.metric(key="assign").val(t["newValue"]).start(ts, "assign")
return [("assign", "global", t["oldValue"], t["newValue"])] return [("assign", "global", t["oldValue"], t["newValue"])]
@mapper("transactionType=core:create") @mapper("transactionType=core:create")
...@@ -143,12 +143,14 @@ def maptransactions(
res.append(("columns", ref, fromcol, tocol)) res.append(("columns", ref, fromcol, tocol))
if source or target: if source or target:
for i in ('fromPHID', 'toPHID'): for i in ("fromPHID", "toPHID"):
PHObject.instance(ref).metric(task=t["taskID"]).start(ts, tocol) PHObject.instance(ref).metric(task=t["taskID"]).start(ts, tocol)
srcphid = getattr(source, i, None) srcphid = getattr(source, i, None)
tophid = getattr(target, i, None) tophid = getattr(target, i, None)
if (srcphid and tophid): if srcphid and tophid:
PHObject.instance(ref).metric(task=t["taskID"]).start(ts, tophid) PHObject.instance(ref).metric(task=t["taskID"]).start(
ts, tophid
)
res.append(("milestone", ref, srcphid, tophid)) res.append(("milestone", ref, srcphid, tophid))
return res return res
...
...@@ -44,7 +44,9 @@ class Config:
--sql
CREATE TABLE IF NOT EXISTS events(ts, task, project phid, user phid, event, old, new);
--sql
CREATE UNIQUE INDEX IF NOT EXISTS events_pk on events(ts, task, project, event, old, new);
--sql
CREATE INDEX IF NOT EXISTS events_project on events(event, project, old, new);
--sql
DROP VIEW IF EXISTS view_column_metrics;
--sql
...
...@@ -55,7 +55,7 @@ def main(
    register_sqlite_adaptors()
    ctx.meta["db"] = Database(db)
    PHObject.db = ctx.meta["db"]
    PHObject.conduit = ctx.meta['conduit']
@app.command()
def request(
...@@ -71,8 +71,8 @@ def request(
    with db.conn:
        for project in cursor.result["data"]:
            project.save()
            while "parent" in project and project.parent:
                project = project.parent
                project.save()
    db.conn.commit()
...
...@@ -135,13 +135,9 @@ class Conduit(object):
"project.search", {"queryKey": queryKey, "constraints": constraints} "project.search", {"queryKey": queryKey, "constraints": constraints}
) )
def maniphest_search( def maniphest_search(self, constraints: MutableMapping = {}) -> Cursor:
self, constraints: MutableMapping = {}
) -> Cursor:
"""Find projects""" """Find projects"""
return self.request( return self.request("maniphest.search", {"constraints": constraints})
"maniphest.search", {"constraints": constraints}
)
def project_columns( def project_columns(
self, project: PHID = None, column_phids: Sequence = None self, project: PHID = None, column_phids: Sequence = None
...
...@@ -29,7 +29,7 @@ from typing import (
)

from rich.console import Console
from sqlite_utils.db import Database, NotFoundError, Table

console = Console()
...@@ -50,10 +50,13 @@ console = Console()
""" """
class EmptyArg: class EmptyArg:
"""Sentinal Value""" """Sentinal Value"""
pass pass
class PHIDError(ValueError): class PHIDError(ValueError):
def __init__(self, msg): def __init__(self, msg):
self.msg = msg self.msg = msg
...@@ -164,7 +167,10 @@ def PHIDType(phid: PHID) -> Type[PHObject]:
"""Find the class for a given PHID string. Returns a reference to the """Find the class for a given PHID string. Returns a reference to the
matching subclass or to PHObject when there is no match.""" matching subclass or to PHObject when there is no match."""
try: try:
parts = phid.split("-") if isinstance(phid, PHIDRef):
parts = phid.toPHID.split("-")
else:
parts = phid.split("-")
phidtype = parts[1] phidtype = parts[1]
if phidtype in PHIDTypes.__members__: if phidtype in PHIDTypes.__members__:
classtype = PHIDTypes[phidtype].value classtype = PHIDTypes[phidtype].value
...@@ -258,6 +264,7 @@ class PHObject(PhabObjectBase, SubclassCache[PHID, PhabObjectBase]):
    This class handles caching and ensures that there is at most one instance
    per unique phid.
    """

    _type_name = None

    @classmethod
...@@ -268,6 +275,7 @@ class PHObject(PhabObjectBase, SubclassCache[PHID, PhabObjectBase]):
        return cls.__name__

    db: ClassVar[Database]
    conduit: ClassVar[Conduit]
    table: ClassVar[Table]
    savequeue: ClassVar[deque] = deque()
...@@ -314,19 +322,23 @@ class PHObject(PhabObjectBase, SubclassCache[PHID, PhabObjectBase]):
        table.upsert(record, alter=True)

    def load(self):
        try:
            table = self.get_table()
            data = table.get(self.phid)
            self.update(data)
        except NotFoundError as e:
            console.log(e)
        return self

    @property
    def loaded(self) -> bool:
        return len(self.__dict__.keys()) > 1

    @classmethod
    def instance(
        cls, phid: PHID, data: Optional[Mapping] = None, save: bool = False
    ) -> PHObject:
        obj = __class__.byid(phid)
        if not obj:
            phidtype = PHIDType(phid)
...@@ -338,10 +350,10 @@ class PHObject(PhabObjectBase, SubclassCache[PHID, PhabObjectBase]):
            obj.update(data)
        if save:
            obj.save()
        return obj  # type: ignore

    @classmethod
    def resolve_phids(
        cls, conduit: Optional[Conduit] = None, cache: Optional[DataCache] = None
    ):
        phids = {phid: True for phid in __class__.instances.keys()}

        if cache:
...@@ -353,6 +365,8 @@ class PHObject(PhabObjectBase, SubclassCache[PHID, PhabObjectBase]):
            # no more phids to resolve.
            return cls.instances
        phids = [phid for phid in phids.keys()]
        if not conduit:
            conduit = PHObject.conduit
        res = conduit.raw_request(method="phid.query", args={"phids": phids})
        objs = res.json()