It would be useful to have pre-built schemas for common tool outputs, such as from Volatility modules, Eric Zimmerman's suite of tools, popular open-source forensics tools (AmcacheParser, appcompatprocessor.py, etc.), frameworks (like the Kansa PowerShell IR Framework), and suites (the Sysinternals Suite: Autorunsc.exe, Sysmon, etc.). The schemas would allow the forensic outputs to marry together in one graph database, which would be SUPER useful. Instead of endless spreadsheets to sift through, the ability to aggregate the data into one common operating picture would take forensic analysis to another level.
Obviously, there would be a ton of links and nodes (and associated attributes!), but that is for the end user to figure out in terms of processing. I for one think that is a good problem to have and a tremendous step in the right direction.
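To make the idea concrete, here's a minimal sketch of what one such schema mapping could look like. The CSV columns, node types, and edge labels below are all illustrative assumptions (not AmcacheParser's actual output format): each row becomes a host node, a file node, and an "executed" edge between them, which is the kind of structure you could then bulk-load into a graph database.

```python
import csv
import io

# Hypothetical AmcacheParser-style CSV output. Column names here are
# illustrative placeholders, not the tool's real schema.
SAMPLE_CSV = """HostName,FilePath,SHA1,FirstRun
WKSTN01,C:\\Windows\\Temp\\evil.exe,abc123,2023-01-05 10:22:01
WKSTN01,C:\\Windows\\System32\\cmd.exe,def456,2022-11-30 08:00:00
"""

def csv_to_graph(text):
    """Map each CSV row onto 'host' and 'file' nodes plus an 'executed' edge.

    Keying file nodes by hash means the same binary seen on many hosts
    collapses into a single node -- that cross-host pivot is exactly the
    value of merging tool outputs into one graph.
    """
    nodes, edges = {}, []
    for row in csv.DictReader(io.StringIO(text)):
        host_id = f"host:{row['HostName']}"
        file_id = f"file:{row['SHA1']}"
        nodes[host_id] = {"type": "host", "name": row["HostName"]}
        nodes[file_id] = {"type": "file", "path": row["FilePath"],
                          "sha1": row["SHA1"]}
        edges.append({"from": host_id, "to": file_id,
                      "rel": "executed", "first_run": row["FirstRun"]})
    return nodes, edges

nodes, edges = csv_to_graph(SAMPLE_CSV)
print(len(nodes), len(edges))  # 3 nodes (1 host, 2 files), 2 edges
```

A per-tool schema would just be this mapping step (row fields to node/edge attributes) defined once, so outputs from Volatility, Sysmon, Amcache, etc. all land in the same node/edge vocabulary.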
Integration and automation are definitely major goals for this project, so I love the idea. I'm not clear on the execution, though. It would be great if you could describe a more specific task, e.g. "ingest volatility output and map it to X,Y,Z elements in Attack Flow". Some examples of inputs and expected outputs would be useful. More details == more likely we can act on it.