FURSAN: A Decentralized AI-Powered News Verification Platform

Introduction

Greetings Polkadot Community,

I am Ali Sajadpour, the proposer and developer behind FURSAN, a decentralized platform designed to combat misinformation using a combination of artificial intelligence (AI) and blockchain technology. By leveraging Polkadot’s robust ecosystem, FURSAN aims to address the growing challenge of verifying news authenticity while restoring trust in journalism.

This post serves as an opportunity to share our vision, gather feedback, and refine the proposal to align closely with the community’s expectations. I invite you to provide your insights, questions, and suggestions as we prepare to submit this proposal for Treasury consideration.

The Problem

The spread of misinformation has eroded public trust in media and created widespread societal issues, including political polarization and public confusion. Existing centralized fact-checking systems are:

  1. Opaque: Their processes lack transparency, which undermines trust in their verdicts.
  2. Limited in Scalability: They cannot handle the volume of misinformation in real-time.
  3. Exclusionary: They often fail to involve diverse stakeholders like journalists and independent validators.

FURSAN’s Solution

FURSAN offers a community-driven, transparent, and scalable solution to misinformation by:

  1. Utilizing AI models for real-time semantic analysis and fake news detection.
  2. Incentivizing a decentralized community of validators through staking mechanisms and governance.
  3. Deploying the platform on Polkadot’s multi-chain architecture to ensure high throughput, low latency, and interoperability.

Key Deliverables

1. Software

  • A fully functional decentralized application (dApp) for submitting, validating, and engaging with verified news content.
  • Custom Substrate pallets for:
    • News submission and validation.
    • Validator reputation scoring.
    • Staking and reward distribution.
  • AI-powered NLP and sentiment analysis tools for detecting misinformation.
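As a rough illustration of the staking and reputation logic these pallets would encode, here is a minimal Python sketch. The production pallets would be written in Rust on Substrate; the names `Validator`, `record_verdict`, and `distribute_rewards`, the 0.1 reputation floor, and the proportional payout rule are all hypothetical assumptions, not part of the proposal:

```python
from dataclasses import dataclass

@dataclass
class Validator:
    stake: int              # bonded DOT (illustrative units)
    reputation: float = 1.0
    correct: int = 0
    total: int = 0

def record_verdict(v: Validator, agreed_with_consensus: bool) -> None:
    """Update a validator's reputation after a news item is finalized.

    Reputation is the ratio of consensus-aligned verdicts, floored at
    0.1 so a single mistake does not wipe out a new validator.
    """
    v.total += 1
    if agreed_with_consensus:
        v.correct += 1
    v.reputation = max(0.1, v.correct / v.total)

def distribute_rewards(pool: int, validators: list[Validator]) -> list[int]:
    """Split a reward pool proportionally to stake * reputation."""
    weights = [v.stake * v.reputation for v in validators]
    total = sum(weights)
    if total == 0:
        return [0] * len(validators)
    return [int(pool * w / total) for w in weights]
```

Under this scheme a payout scales with both bonded stake and track record, so a large staker with a poor verdict history earns less than a smaller but consistently accurate validator.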

2. Infrastructure

  • Blockchain Layer: Built on Polkadot to ensure scalability, security, and interoperability.
  • Storage Layer: Decentralized storage solutions (e.g., IPFS/Arweave) for data immutability.
  • Frontend: A user-friendly interface for journalists, validators, and the public.
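To illustrate how the storage layer could guarantee immutability, the sketch below derives a content digest that would be recorded on-chain while the full article lives on IPFS or Arweave. A real IPFS CID uses multihash/CIDv1 encoding; plain SHA-256 stands in here purely for illustration, and `content_id`/`verify` are hypothetical names:

```python
import hashlib
import json

def content_id(article: dict) -> str:
    """Derive a deterministic digest for an article payload.

    The full article lives off-chain (e.g. on IPFS or Arweave); only
    this digest is recorded on-chain, so anyone can later verify that
    the stored payload was not altered after submission. Keys are
    sorted so logically identical payloads hash identically.
    """
    canonical = json.dumps(article, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(article: dict, onchain_digest: str) -> bool:
    """Re-hash a retrieved payload and compare with the on-chain record."""
    return content_id(article) == onchain_digest
```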

3. Media Content

  • Educational videos and tutorials to onboard users and validators.
  • Campaigns promoting the platform’s benefits to media agencies and the general public.

4. Community Engagement

  • Workshops, hackathons, and webinars to onboard validators, journalists, and developers.
  • A governance DAO to allow stakeholders to vote on platform updates and policies.
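A governance DAO vote could, at its simplest, be a stake-weighted tally like the sketch below. Polkadot's OpenGov layers conviction voting and tracks on top of this idea; the flat-weight `tally` function and its 50% threshold are illustrative assumptions only:

```python
def tally(votes: dict[str, bool], stakes: dict[str, int],
          threshold: float = 0.5) -> bool:
    """Tally a stake-weighted governance vote.

    `votes` maps voter id to aye/nay; `stakes` maps voter id to bonded
    stake. The proposal passes when aye stake exceeds `threshold` of
    all stake that actually voted.
    """
    aye = sum(stakes[v] for v, approve in votes.items() if approve)
    total = sum(stakes[v] for v in votes)
    return total > 0 and aye / total > threshold
```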

Proposed Budget

The requested budget is 8,000 DOT (~$40,000 USD), allocated as follows:

Milestone        Deliverables                                     Cost (USD)   Cost (DOT)
Milestone 1      Research, stakeholder engagement, initial dev    $4,200       840 DOT
Milestone 2      AI integration, pallet development, dApp dev     $26,100      5,220 DOT
Milestone 3      Testing, deployment, and onboarding              $7,700       1,540 DOT
Infrastructure   Hosting, storage, and maintenance                $2,000       400 DOT
Total                                                             $40,000      8,000 DOT

Objectives and Metrics for Success

Platform Adoption

  • 500 active users within six months of launch.
  • 50,000 verified news submissions in the first year.

Community Engagement

  • Onboard 30 validators and 20 journalists as active contributors.
  • Establish a governance DAO with 50% participation from stakeholders.

AI and Blockchain Integration

  • 85%+ accuracy in fake news detection using AI models.
  • 1,000+ transactions per second (TPS) supported by Polkadot’s infrastructure.

Technical Architecture

(An architecture diagram illustrating the AI, Blockchain, Storage, and Frontend layers will be attached to the formal proposal.)

  • AI Layer: Real-time analysis and classification of news submissions using NLP and semantic models.
  • Blockchain Layer: Handles submissions, validations, staking, and rewards using Substrate pallets.
  • Storage Layer: Decentralized and immutable storage of data with IPFS or Arweave.
  • Frontend Layer: User-facing interface (web and mobile) for validators, journalists, and the public.
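One very simple building block the AI layer could use is semantic similarity against a corpus of already-debunked claims. The stdlib-only sketch below uses bag-of-words cosine similarity as a stand-in for the transformer-based semantic models the proposal envisions; `flag_submission` and the 0.6 threshold are illustrative assumptions:

```python
import math
import re
from collections import Counter

def _vec(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_submission(text: str, debunked: list[str],
                    threshold: float = 0.6) -> bool:
    """Flag a submission that closely matches a known debunked claim."""
    v = _vec(text)
    return any(cosine(v, _vec(claim)) >= threshold for claim in debunked)
```

A flagged submission would then be routed to human validators rather than rejected outright, keeping the final verdict with the community.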

Why Polkadot?

FURSAN is built for Polkadot due to its unmatched scalability, interoperability, and decentralized governance framework. Polkadot’s ecosystem aligns with FURSAN’s mission of creating a transparent, community-driven platform for combating misinformation.


Request for Feedback

We value your insights and invite feedback on:

  1. Technical feasibility of the proposed architecture.
  2. Budget allocation and milestones.
  3. Validator incentives and governance mechanisms.
  4. Strategies to maximize adoption and engagement within Polkadot’s ecosystem.

Engagement Plan

We aim to provide transparent reporting and regular updates via:

  1. Polkassembly: For formal discussions and updates.
  2. GitHub: Open-source repository for code, documentation, and issue tracking.
  3. Discord: Interactive Q&A sessions and progress updates.

Contact Information

Feel free to reach out with any questions, suggestions, or comments:


Closing Note

FURSAN represents an innovative approach to combating misinformation through decentralized technologies. With Polkadot’s robust infrastructure and the support of this incredible community, we believe FURSAN can become a cornerstone in restoring trust in media.

Thank you for your time and feedback—I look forward to hearing your thoughts!


What jurisdiction is FURSAN based in and what legal obligations and risk are there?

Say you’re in a non-free-speech jurisdiction such as the EU or Cuba.
What happens when your independent validators decide something is true, but EU bureaucrats think it’s disinformation, or perhaps even a criminal act?

  • Do you disclose the identities of those validators to the authorities of the EU member country asking for that info? How about to Cuban authorities?
    • If you do, who will want to become a validator?
    • If you do not, I assume FURSAN will be at risk of aiding-and-abetting charges.
  • The same goes for the rest, AI models for example. Will you use EU-approved models to grade news from Cuba, or Cuba-approved ones? What models will be used for users from India: European or Indian? The cost of figuring out and navigating that mess, on both the legal and the dev side, will be enormous.

This field is full of legal landmines.

Next, the incentives and demand. X has Community Notes, and we may agree or disagree on how well they work, but it’s included and it’s there. X has no incentive to add third-party services for this, because it has an AI and it has Community Notes. That leaves you with Tier 2, Tier 3, and Tier 4 centralized social networks, who will ask you about legal risks and mitigation (the nightmare explained above), and decentralized Tier 4 and Tier 5 networks such as Nostr.
I have no idea what Nostr and similar networks look like, but I wonder what incentive their users have to deal with “Community Notes”-style content. I assume most don’t want any external moderation, and their devs don’t want to spend money on integrating and paying for it, so what else but inflation will pay for the work of independent validators?

Then there’s the problem of accessing the news to be reviewed. How do you get a validator from France to access Rumble to review some video when Rumble is blocked there? The same goes for independent validators from other countries with censorship. You may need to tag validators by jurisdiction and assign them only tasks involving media or networks they can and may view… More complexity.

I’m sorry that I can’t think of some positive sides, but this sounds extremely difficult. Just the amount you’ll need to spend on lawyers before you can deploy anywhere this matters could exceed the total amount you mentioned.

You could create a completely permissionless network, but you’d need to route all app traffic through a VPN, Tor, or a mixnet to remove risks to the team, validators, and users. That would work, but again, it’s then a security app that would cost a lot to develop and require security expertise. And then you’d probably be harassed by governments “just in case,” because they couldn’t necessarily tell what you’re up to (Pavel Durov, etc.).