Replies: 14 comments 6 replies
-
For context, here are the persona descriptions:
Aspiring Alex:
Founder Frank:
Open Source Oliver:
-
Here's what we came up with for the Mission and Vision statements:

Vision (the future we want to achieve): A secure FOSS ecosystem, with few supply chain vulnerabilities and a highly engaged cohort of FOSS security developers.

Mission (how we can get there): To secure the FOSS ecosystem supply chain by assisting with CVE discovery and patching while incentivizing engineers to engage with FOSS and AppSec.

Description statements: BLT is an OWASP-hosted, community-governed platform being refocused into a hybrid automated and human-driven discovery engine, designed to teach responsible security work while improving the Internet’s baseline security — with a clear open-core and fork-friendly future. Building security-aware contributors is the most durable way to improve the FOSS ecosystem.
-
The problem is definitely worth solving, and the solution proposed here seems pretty cohesive and nice to me. A few things I can think of and am maybe a bit concerned about are -
Overall, this does feel like a good plan, since we'll be creating/reflecting the thing that our product is supposed to be solving and then directing the contributors/agents into a cohesive community to grow, help, and learn while sharing that growth with others. The UI/UX bug hunter sounds like noise to me in this for now, and a few things feel missing, like a full-on pipeline for issue prioritization for big orgs, or a stab at the decentralized security sector through this autonomous bug-scanning agent. This is all I could think of on the first few reads of the above ideas and direction. Looking forward to being a part of this! Happy to take up any of these points and elaborate or discuss more.
-
1. High level vision
I do think this problem is worth solving in this way. In fact, I feel this might actually help BLT get back to having a clear “core” again. Right now there are many features, and it’s sometimes hard for new users to understand what BLT mainly stands for. If the core becomes security + remediation + learning, that gives a much clearer picture of what BLT is about.

2. Vulnerability discovery
At a high level, I think automation / AI-based scanning is necessary here, because manually checking codebases at scale is not really practical. So scanners + AI helping to find dependency issues or known CVEs make sense as the first step.

3. UI/UX bug discovery
Personally, UI/UX bug discovery feels like noise compared to the new security-focused direction, at least as a main pillar. UI bugs are easier to find and usually don’t need strong incentives, and contributors already report these naturally. That said, frontend issues can sometimes become security issues (vulnerable frontend dependencies, auth-flow mistakes, etc.). So maybe instead of general UI/UX bugs, the focus could be on frontend-related security risks (like the recent critical security issue in React components), not visual or usability problems.

4. Incentivizing fixes
I agree with @sidd190 that incentives help and are good for contributor motivation, but my main concern is that before heavily incentivizing security fixes, we first need to make BACON / badges more credible and meaningful.

5. Education & contributor growth
I feel education is very important for this vision, but it might be better to keep it in a separate repo or platform, so the core product doesn’t get too heavy. Right now we already have some security labs, but they are mostly MCQ-type questions. I think we first need to clearly decide what kind of education we actually want to provide to contributors and what level of security knowledge we expect them to have.

And maybe the best learning source could be BLT itself — if vulnerabilities are found during scanning of repos, those patterns (without exposing sensitive details) can be turned into learning material. So people who don’t understand what a certain vulnerability means can learn about it through education modules based on real findings.

6. Knowledge sharing & community impact
For patterns and trends, dashboards feel more useful than blogs. Blogs can explain BLT features, contributor stories, or learning resources, but for vulnerability patterns and common issues, dashboards or reports feel more appropriate and safer.
-
High-level Direction
The overall direction is solid, but we might be overlooking certain aspects and under-examining others. I'll break down my understanding of these ideas and approaches as we go. So the direction of refocusing BLT on what it was always meant to be, by focusing majorly on identifying security-based issues → providing solutions for those issues → gamifying the entire process, feels RIGHT!! This 3-step process is, in and of itself, one half of the ENTIRE execution plan. Now, further breaking down the said "3-step process":

Vulnerability Discovery (Buttercup)
So how often would we run Buttercup scans without burning ourselves out?
Incentivising Fixes (Anti-gaming)
Leaderboards, Recognition, and Mentorship
Education (blt-education, Self-paced)
Also, something off the top of my head: contributors at or above a specific rank on the leaderboard could join as teachers for beginners within the blt-education repo. This again leads us back to the goal of incentivizing that whole third of this year's goal, and of ensuring that the ranking is an accurate reflection of an individual's involvement in the BLT community.

Knowledge Sharing (Safe and Useful)
UI/UX Frontend Bug Reporting
Additional clarifications I want to make explicit (to avoid gaps)
Questions I'd like ALL OF YOU to weigh in on:
-
My thoughts on “Incentivizing fixes”
I really like the direction of a revamped BACON-style model focused on security-related PRs, and wanted to share a few thoughts on the three discussion questions.
Happy to expand on any of these, especially around how the automation + reputation layer could be designed so that maintainers keep control but aren’t overwhelmed by noise.
-
1. High-level Vision
My thoughts: It seems good to me; if we distribute things as mentioned in the form, we might achieve a great milestone. Thoughts on the key questions:

> I think with how the problem is broken down into separate features and manageable services, it won't be much of a challenge anymore.
> As a developer myself, it would be much easier to work in isolation on each issue in a separate service, rather than stacking each and every feature into BLT core.
> Looking at each component and feature so far, it seems good, but if I were asked to split out the lowest-priority one, it would be the education section, as we don't have many resources or regular article / video content. We can focus on it, but we already have the whole open internet with useful resources like [freecodecamp](https://www.freecodecamp.org/), [medium](https://medium.com/) and more. If we plan on adding resources, they should be more specific to OWASP / OWASP-BLT. (More on this in the project section.)

2. Vulnerability Discovery
A complex yet super useful feature, if built. Pointer thoughts:
> It would be great if it could be hybrid, as both approaches come with their own merits and demerits.
I think yes, but with a constraint: it should not spam the maintainer's inbox with frequent updates. It should be a single, generous email sent once per high-priority vulnerability, so that it does not get ignored.
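The one-email-per-vulnerability constraint above could be enforced with a simple de-duplication guard, sketched below. All names are hypothetical, and a real service would persist the sent set in a database rather than in memory:

```python
# Hypothetical in-memory record of notifications already sent.
sent_notifications: set[tuple[str, str]] = set()

def should_notify(project: str, cve_id: str, severity: str) -> bool:
    """Send at most one email per (project, CVE), and only for high-priority findings."""
    key = (project, cve_id)
    if severity not in ("HIGH", "CRITICAL") or key in sent_notifications:
        return False
    sent_notifications.add(key)
    return True
```

The point of the sketch is just that the dedup key is (project, CVE), not (project, scan run), so repeated scans never re-email the same finding.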
🔴 Need more context in order to approach this.
> Definitely open-source maintainers, but also closed-source projects that struggle to manage unpatched CVEs.
> Simply by making the scope narrow, opt-in, and focused.

3. Internet-Wide Bug Discovery
A good starting point for early contributors. Discussion points (thoughts):

> It becomes a distraction if not managed correctly by the maintainer; since UI bugs are easier pickups, there is a huge chance of spam contributions. To maintain the principle, every contributor should follow the code of conduct and contribution guidelines. If managed correctly and efficiently, it would definitely be a complementary strength.
> It depends on the use case; we might add it.
> Simply by adding labels and filters.

4. Incentivizing Fixes
> Both. Incentives can help people start fixing issues, but they become harmful if they push people to chase rewards instead of quality. (We can use the BACON reward system; similar to a GSoC warm-up, BACON rewards could be tracked during the contribution phase, and top contributors could be rewarded with prizes / bounties.) Note: this is only possible when maintained by a human maintainer; otherwise it may lead to spam.
> Incentives tied to accepted, reviewed, and merged fixes, not just activity. Rewards should come after impact, not after effort.
> The only solution is human review.

5. Education & Contributor Growth
Pointers:
> The better way would be to keep education outside of the core project, and to support resources not just for BLT but for the entire OWASP community.
> Basically beginner to intermediate: BLT, general open source, and resource links for the relevant tech. I'm thinking more about open-source contribution itself: making a first contribution and setting up each project, since that's where most beginners fail. Maybe we could publish articles like “How to become an ideal contributor”, “Things to avoid while contributing”, “Maintaining respect between contributors”, “Best practices of good contributors”, and “What makes a good PR”. Short articles with examples will definitely help early contributors. (I faced these issues in my own early contribution phase.)
> As mentioned, in a gamified fashion via course / article completion.
6. Knowledge Sharing & Community Impact
> Share learnings, patterns, and mistakes, not raw vulnerability details. I think BLT should focus on what we learned while fixing issues, not the exact exploit or sensitive data. Things like common mistakes, recurring patterns, or why a fix worked are safe and useful.
> Short, focused content that is easy to consume and easy to maintain. (Mentioned in detail in the section above.)
> The community, with maintainer guidance and approval.

I tried to keep the answers as short as possible; I might have misunderstood a feature, and would appreciate your comments on that. I'd be happy to dive deeper into any of the sections above; if asked to choose, I would be happy to take over the entire Vulnerability Discovery and management area.
-
I feel the strongest direction for BLT is to treat vulnerability discovery and bug reporting as the core product, with education and contributor growth built around it, rather than as parallel or loosely related features.
The vulnerability and bug logging flow should remain the primary focus. Strengthening how issues are surfaced, triaged, and followed through to remediation adds more long-term value than continuing to expand sideways into unrelated functionality.

Learning Through Helping Others
A major opportunity lies in enabling people who can’t afford enterprise-level audits to still receive meaningful security guidance.
Over time, this could resemble a security-focused discussion layer, similar in spirit to StackOverflow, but centered on remediation and responsible disclosure rather than abstract Q&A.

Keeping Scope Clean and Focused
As BLT evolves, it may help to actively separate or remove non-core parts of the current repo, such as jobs, adventure, or banned apps, that don’t directly support:
Reducing this surface area would simplify onboarding, clarify BLT’s identity, and make it easier for new users to understand what the platform is primarily for.

Automation With Human Judgment
Automation and AI can help surface potential issues, but fixes should remain human-driven.
This keeps trust with maintainers intact and avoids intrusive behavior.

Beginner-Friendly Entry Points
Not all issues need to be incentivized.

Opening BLT Beyond a Single Audience
BLT already attracts many contributors from GSoC, which is great.

PS: Sorry if any of these points overlap with others; I read several comments earlier, so some ideas may have stayed in my head while writing this.
-
Hey @DonnieBLT, thanks for sharing this early-stage vision — I really appreciate that this is being opened up for discussion before anything is finalized.

Opinion: Project Registration, Trust, and Issue Severity
From my perspective, maintainer consent is critical to the success of this platform. If open-source organizations or projects are registered by anyone without maintainer involvement, it can easily lead to frustration. Even well-intentioned security reports may feel intrusive or overwhelming, especially if a large number of issues are raised without context or prioritization. On the other hand, when maintainers register their own organization or project, they are opting into security checks and remediation support. In that case, suggested issues and fixes are far more likely to be welcomed and acted upon. This opt-in model builds trust and encourages real collaboration rather than resistance. I believe the platform should strongly prefer:
Additionally, when security issues are raised, they must be clearly labeled by severity level (for example: low, medium, high, critical). Proper severity labeling helps maintainers:
Clear severity labels also improve communication and make security discussions more constructive and actionable for everyone involved.

Opinion on Vulnerability Discovery, Notification, and Fixing Approach
1. Automated Scanning & Issue Acknowledgement
In my opinion, once an automated tool completes a scan and identifies potential security issues, those findings should be:
At the same time, maintainers should always retain full control.

2. Fixes Should Be Human-Reviewed (Not Fully Automated)
I strongly believe that all fixes should be human-reviewed. Automated tools are useful for detection, but:
This approach builds trust and avoids introducing risky or incorrect changes.

3. Notification Method (Email)
Email seems to be the clearest and most appropriate way to notify maintainers after scanning:
Emails should ideally summarize:
4. Severity Labeling for Better Communication
Every raised security issue should be clearly labeled with a severity level (for example: low, medium, high, critical).
Clear severity labeling improves trust and makes security discussions more actionable. Overall, I think automated scanning combined with maintainer acknowledgement, human-reviewed fixes, clear severity labels, and respectful email notifications is a balanced and responsible way to help open-source projects improve their security.

Opinion on UI/UX Issues, Incentives, Education, and Knowledge Sharing
UI/UX Bug Discovery as an Entry Point (Not a Distraction)
I don’t see UI/UX bug discovery as a distraction from security.
In practice, organizations also receive very valuable UI/UX suggestions. These should not be ignored. If the platform focuses only on security issues, it may become difficult for new contributors to enter and grow within the community.

Incentives and Rewards for Fixing Critical Security Issues
I believe rewarding contributors for fixing critical security issues is a positive idea when done carefully. With:
contributors will stay focused on quality rather than quantity. A healthy flow could be:
This ensures:
At the same time, it’s important to note that not all bugs or issues need to be rewarded.

Education & Contributor Growth (Right Time, Right Way)
Once the platform’s core functionality is clear and the community has active organizations to work with, education becomes very important. At that stage, it would be valuable to teach new contributors:
This structured learning can help contributors smoothly enter the ecosystem and grow over time.

Knowledge Sharing Without Causing Harm
Knowledge sharing should focus on:
It should not disclose sensitive project-specific security details. A good approach could be sharing:
This keeps the community informed and motivated while protecting projects.

Apologies for the length of this comment — these are my genuine thoughts and concerns that I wanted to share clearly. I’d really appreciate your feedback on whether I’m thinking in the right direction here, or if I’m getting distracted from the core goals. Thanks a lot!
-
Most of the points have been addressed at this point, so here are my two cents.

High-Level Vision
I think the vision is pretty good. The problem is definitely worth solving in this way because many open-source projects struggle to keep up with security vulnerabilities due to limited resources and contributors. A service focused on identifying and helping fix unpatched vulnerabilities fills a real gap in the ecosystem. The people who benefit most from this approach are maintainers, since they are the ones who will receive direct help in identifying and fixing vulnerabilities in their projects. Maintainers of smaller open-source projects especially often lack the time, expertise, or contributor support to stay on top of security issues. This initiative could take some of that burden off their shoulders.

Vulnerability Discovery
Automated fixes are appealing since manually reviewing all vulnerabilities would need a lot of human effort. However, fully automated solutions risk introducing errors. There are already tools that discover vulnerabilities: Dependabot automatically checks for outdated or vulnerable dependencies and opens PRs to update them, and OWASP Dependency-Check scans projects to identify known vulnerable components. To differentiate, we could focus on broader CVE detection beyond just dependencies and provide more context around fixes. We have to think about other ways in which we could differ. For email notifications, messages should clearly explain why the vulnerability is hazardous to encourage maintainers to take action. If maintainers do not understand the risk, they are more likely to ignore the email.

Internet-Wide Bug Discovery
I think focusing on UI/UX bugs feels out of scope and would be a distraction from the core mission.

Incentivizing Fixes
Incentivizing fixes is a good idea, especially for attracting contributors, yet we cannot risk encouraging low-quality PRs submitted just for rewards.
In the end, it is a supply-and-demand problem: if there are many contributors and few issues, normal BACON and badge-style rewards would be a good idea, but if there are few contributors and many fixes needed, amping up rewards, like special one-time badges or more BACON points, might help. It is also a question of reputation, as badges that provide value in users' careers might incentivize them more. As for low-quality fixes, we could use AI to analyse the quality of a fix before rewarding it, and award higher-quality fixes more. Badge and BACON-style incentives align well with open-source values and OWASP's mission, but may not be motivating enough for all contributors, which might lead to fewer people participating. Before revamping the model, I think we can survey contributors to understand what non-monetary rewards they would actually want, then design the system based on that. This way, the incentive model is built on real feedback rather than assumptions.

Education & Contributor Growth
A separate repository for education is a good idea. While not core to identifying and fixing bugs, it provides a lot of value for growing the contributor base and improving overall security awareness in the community.

Knowledge Sharing & Community Impact
The right balance between transparency and safety might be sharing fixes for common vulnerabilities while keeping less common or project-specific ones hidden until they are fixed. This way, the community can learn from patterns and common mistakes without exposing sensitive information that could be exploited. Learnings can be shared through monthly or annual reports and dashboards. However, as others have pointed out, this needs heavy maintenance. Human-written reports build trust, though, and having security professionals author or review these reports adds credibility. Regarding ownership: knowledge maintenance could rotate among experienced security contributors on a yearly or monthly basis, based on availability.
This prevents burnout and ensures fresh perspectives while keeping the knowledge base active and up to date. I'd be happy to dive deeper into any of the sections above and expand on them, especially the incentives part, as I'm already working on the team badges feature and can evolve it based on the new direction.
-
First, let me say thank you to everyone who has contributed! All of your input is really thoughtful and it means a lot that you've spent time reading and responding.

Vulnerability Detection and Scanning:
I want to be clear the vision is to have some scanning or vulnerability detection tooling that is outward facing. In other words - how might we support other OSS projects in detecting and patching their vulnerabilities?

Buttercup:
I wanted to provide a bit more context on Buttercup: Buttercup is a cyber reasoning tool that was developed as part of the AI Cyber Challenge hosted by DARPA (part of the US Gov). The company that created it won second place in the overall competition and decided to keep it open source and continue development with the community. Currently it runs on a k8s cluster you configure, and uses fuzzing and LLMs to scan a repo. It then identifies the issues (CWEs) and can run a patcher service to generate code fixes. It currently supports Java and C projects that are OSS-Fuzz compatible, and projects that build successfully and have existing fuzzing harnesses.

Areas of Investigation for Buttercup
UI/UX:
What I read above is that there is a general consensus that reporting of UI/UX bugs should be deprioritized, and should not factor much, if at all, into the BACON rewards.

Overall Project Vision:
I agree with @Nachiket-Roy
Vulnerability fixes
I am seeing several comments that echo my own concern about how to provide vulnerability information with enough detail to potential contributors (especially beginners) so they can make the fixes, without violating ethical disclosure. To me this is the biggest issue we need to solve.

Incentives and rewards
I agree we need to make BACON credible for it to be any kind of incentive. I also agree that we should only reward PRs that actually fix something. My suggestion was to cross-reference a user's GitHub username with the commit URLs published in the NVD in order to award them BACON. The NVD record also shows the CVSS 4.0 severity rating, which could be used to map to our weighted scale. An example page is: https://nvd.nist.gov/vuln/detail/CVE-2026-24130 - scroll down to "References to Advisories, Solutions, and Tools" to see the commit URL in the table. This would ensure BACON would only be awarded once the fix and CVE are published.

Project Cleanup
I agree with several folks who mentioned separating or removing non-core parts of the current repo. I am curious how much impact removing things like jobs, adventure, or banned apps would have on the main website? Are they heavily integrated, or is removing/separating them trivial?

Education
There have been some great ideas on how to build out an education program. It is definitely a lot of work, and we should consider whether we want to host and run a CMS ourselves, host and run an LMS ourselves, and what resources we have to create labs and training materials. I love the idea of using real code problems to create learning tools, but I also know it is a LOT of work to develop and maintain good learning guides. In a previous work project my team created 120 wiki articles covering common scan findings and remediation steps, but our customers had a very tight scope of Java Spring Boot back ends with React front ends and we were able to speak very specifically to their tech stacks.
I'm wondering if utilizing existing OWASP or other third-party learning might be a good starting point before committing to maintaining our own resource libraries?
-
Most of the points have already been addressed well by others, so I’ll just share a quick gist of my overall view.
That said, this is also the feature we need to be most careful about. I’m currently working on a Zero-Trust pipeline that keeps everything encrypted end-to-end and avoids storing sensitive data publicly or on the web. Even with that in place, we should treat this as high-risk to get wrong—so I think we need thorough testing, clear guardrails, and a gradual rollout before introducing it broadly to the community.
-
My opinions and suggestions for BLT, as a security practitioner
I believe this problem is worth solving in this way, and more importantly, I think it can help BLT regain a clear and recognizable core identity. At the moment, BLT offers many features, but for a new user it is not always obvious what BLT fundamentally stands for. Refocusing the core around security, remediation, and learning gives BLT a much clearer narrative:
This direction feels aligned with BLT’s mission, OWASP values, and the long-term health of the FOSS ecosystem.

Who Benefits the Most
The primary beneficiaries, in my view, are:
- Small open-source projects
- Early-stage founders
- Projects with limited or no security budget

These teams often lack:
- Dedicated AppSec engineers
- Proper vulnerability monitoring
- Time to track dependency risks and disclosures

For them, early alerts plus remediation guidance can meaningfully reduce risk without requiring enterprise-level tooling or cost.
At a high level, I think automation and AI-assisted scanning are necessary, because manual vulnerability discovery does not scale across many repositories. Using scanners and AI to:
- Detect dependency risks
- Identify known CVEs
- Surface potential security issues

makes sense as a first-step signal generator.

On Automated PRs
Automatically submitting security PRs feels high-risk:
- It can break projects
- It may violate contribution or disclosure guidelines
- It could be misused or spammed
- Security fixes require careful human judgment

A safer and more responsible flow, in my opinion, would be:
This keeps automation focused on detection and triage, while remediation remains human-led and accountable.

Notifications
Email still feels like the most accepted and expected channel for responsible disclosure. GitHub Security Advisories and opt-in dashboards for maintainers could improve flexibility without breaking trust.
In the context of a security-focused BLT, general UI/UX bug discovery feels more like noise than a core pillar. UI bugs:
- Are easier to find
- Usually don’t require strong incentives
- Are already reported naturally by users

However, frontend issues can become security issues (e.g., vulnerable frontend dependencies, insecure auth flows, misused components). So instead of general UI/UX bugs, I think a better fit would be:
- Frontend-related security risks
- Dependency vulnerabilities in UI frameworks
- Security-relevant logic flaws in frontend code

This keeps the scope aligned with security rather than visual or usability feedback.
I agree that incentives help motivation, but I think credibility must come before scale. If incentives (BACON, badges, rewards) are not trusted or meaningful:
- Skilled contributors may ignore them
- Low-quality PRs may increase

A more structured incentive model could depend on:
- Severity of the issue
- Trust level and history of the contributor

For example:
- Beginners receive learning-oriented badges and progression markers
- Trusted contributors earn higher-value recognition for meaningful fixes

This avoids gaming while still encouraging growth.
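The severity-and-trust idea above could look something like the following minimal sketch. All numbers and tier names are illustrative placeholders, not a proposed final scale:

```python
# Illustrative base rewards per CVSS severity and trust multipliers (placeholders).
BASE_REWARD = {"LOW": 5, "MEDIUM": 15, "HIGH": 40, "CRITICAL": 100}
TRUST_MULTIPLIER = {"beginner": 0.5, "regular": 1.0, "trusted": 1.5}

def bacon_reward(severity: str, trust: str, merged: bool) -> int:
    """Award BACON only for merged fixes, scaled by severity and contributor trust."""
    if not merged:
        return 0  # rewards follow impact, not effort
    base = BASE_REWARD.get(severity.upper(), 0)
    return int(base * TRUST_MULTIPLIER.get(trust, 1.0))
```

Gating on `merged` is what makes the scheme hard to game: activity alone earns nothing, and unknown severities or trust tiers degrade safely rather than inflating payouts.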
Education is critical to this vision, but I believe it should remain separate from the core product (e.g., a dedicated repo or platform), so BLT itself does not become too heavy. Before expanding education, BLT should clearly define:
- What level of security knowledge is expected
- What contributors should understand in practice
- Which vulnerabilities are “core knowledge” for BLT users

Education should be hands-on, not theory-heavy:
- Vulnerable code examples
- Small code snippets to analyze
- Realistic scenarios instead of MCQs
- Explanations tied to real fixes

Ideally, BLT itself becomes the learning source:
- Patterns found during scanning (without exposing sensitive details)
- Common mistakes turned into learning modules
- Real-world findings driving education content

This keeps education tightly connected to BLT’s actual work, not generic cybersecurity material.
For surfacing patterns and trends, I think dashboards and reports are more effective than blogs.
- Dashboards: vulnerability trends, common categories, remediation outcomes
- Blogs: contributor stories, learning guides, platform updates

Dashboards feel:
- More actionable
- Safer for sensitive data
- More aligned with maintainers’ needs

Closing Perspective
All of the above reflects my own opinions, shaped by:
- Reviewing BLT’s current feature set
- Identifying gaps and overlaps
- Considering BLT’s stated mission and long-term goals

My intent is not to expand BLT indiscriminately, but to help it move toward a clearer, more defensible core that balances:
- Security responsibility
- Contributor growth
- Community trust
- Long-term sustainability
-
Closing this; we can continue the discussion in the individual idea posts.
-
Discussion: Core Feature Ideas & Direction (Open for Feedback)
This document summarizes an early concept for refining BLT. Nothing here is final — all ideas, assumptions, and directions are open for discussion, critique, and improvement.
1. High-Level Vision
Key questions:
2. Vulnerability Discovery (Concept Stage)
Use Buttercup or similar scanning methodologies to detect unpatched CVEs
Automated or semi-automated email alerts for scan results
Notifications sent to appropriate project contacts (maintainers, owners, or designated security contacts)
Potentially submit pull requests with fixes
Explicitly not a tool for security researchers
Positioned as a separate service focused on:
Discussion points:
3. Internet-Wide Bug Discovery (Exploratory)
Discussion points:
4. Incentivizing Fixes (Early Idea)
Incentivize vulnerability patching through:
Discussion points:
5. Education & Contributor Growth
blt-university / blt-education
Goal: grow contributors who can confidently identify and fix issues
Discussion points:
6. Knowledge Sharing & Community Impact
Focus on surfacing:
Avoid publishing raw vulnerability details
Strong emphasis on:
Discussion points:
7. Open Questions
🔹 Original Concept (Unedited Reference from @kittenbytes)
Core Features
Three main pillars
Vulnerability Discovery
Utilize Buttercup or other scanning methodologies to find unpatched CVEs
Email service to alert of scanning
Report unpatched CVEs via email
Provide fix as a PR to project?
Not a tool for security researchers
UI/UX Bug Discovery & Reporting
Incentivize Vulnerability Patching
Education & Contributor Growth
blt-university / blt-education
Knowledge Sharing & Community Impact
💬 All feedback welcome — technical, philosophical, critical, or exploratory.
This is intentionally early-stage and meant to evolve through discussion.