Is your AI project a nonstarter?
Here’s a reality check(list) to help you avoid the pain of learning the hard way
If you’re about to start a machine learning or AI project, here’s a checklist to work through before you dive into algorithms, data, and engineering. Think of it as your friendly consultant-in-a-box.
Don’t waste your time on AI for AI’s sake. Be motivated by what it will do for you, not by how sci-fi it sounds.
This is a super-short version of my 18-minute monster Ultimate Guide to Starting AI. If you’re about to embark on ML/AI, here’s hoping you can answer “yes” to all of these questions.
If you answer “no” to any of the checklist questions, this might be a portrait of your project.

Step 1 of ML/AI in 22 parts: Outputs, objectives, and feasibility
- Correct delegation: Does the person running your project and completing this checklist really understand your business? Delegate decision-making to the business-savvy person, not the garden-variety algorithms nerd.
- Output-focused ideation: Can you explain what your system’s outputs will be and why they’re worth having? Focus first on what you’re making, not how you’re making it; don’t confuse the end with the means.
- Source of inspiration: Have you at least considered data-mining as an approach for getting inspired about potential use cases? Though not mandatory, it can help you find a good direction.
- Appropriate task for ML/AI: Are you automating many decisions or labels in a setting where you can’t simply look up the perfect answer each time? Answering “no” is a fairly loud sign that ML/AI is not for you.
- UX perspective: Can you articulate who your intended users are? How will they use your outputs? You’ll suffer from shoddy design if you’re not thinking about your users early.
- Ethical development: Have you thought about all the humans your creation might impact? This is especially important for all technologies with the potential to scale rapidly.
- Reasonable expectations: Do you understand that your system might be excellent, but it will not be flawless? Can you live with the occasional mistake? Have you thought about what this means from an ethics standpoint?
- Possible in production: Regardless of where those decisions/labels come from, will you be able to serve them in production? Can you muster the engineering resources to do it at the scale you’re anticipating?
- Data to learn from: Do potentially useful inputs exist? Can you gain access to them? (It’s okay if the data don’t exist yet as long as you have a plan to get them soon.)
- Enough examples: Have you asked a statistician or machine learning engineer whether the amount of data you have is enough to learn from? Enough isn’t measured in bytes, so grab a coffee with someone whose intuition is well-trained and run it by them.
- Computers: Do you have access to enough processing power to handle your dataset size? (Cloud technologies make this an automatic yes for anyone who’s open to using them.)
- Team: Are you confident you can assemble a team with the necessary skills?
- Ground truth: Unless you’re after unsupervised learning, do you have access to outputs? If not, can you pay humans to make them for you by performing the task over and over?
- Logging sanity: It’s possible to tell which input goes with which output, right? (A minimal pairing check is sketched just after this list.)
- Logging quality: Do you trust that the dataset actually is what its purveyors claim it is? (To learn from examples, you need good examples to learn from.)
- Indifference curves: Since your system will make mistakes, have you considered how much worse one type of mistake is relative to another? (A toy cost calculation is sketched after this list.)
- Simulation: Have you considered working with an expert in simulation to help you visualize what you’re asking for? Not mandatory, but useful.
- Metric creation: Have you stitched the scoring of individual outputs into a metric for the business performance of your system over many instances? (A small aggregation sketch appears after this list.)
- Metric review: Has your business performance metric been reviewed to ensure that it’s not possible to get a good score on it in some perverse and harmful way?
- Metric-loss comparison: (Optional.) Does your business performance metric correlate well with a standard loss function? If not, what you’re asking for might be very difficult.
- Population: Have you thought carefully about which instances you need your system to work for? The statistical population of interest defines which broad collection of instances your system’s performance tests will cover.
- Minimum performance: Have you defined a strict minimum performance criterion for testing and committed to crushing your system if it doesn’t make this bar? (A bare-bones launch gate is sketched after this list.)
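A few of the checklist items above are easier to picture with a little code, so here are some sketches. For the logging-sanity item, the question is whether you can pair each logged input with the output it produced. Here’s one way such a check could look; the `request_id` key and the toy DataFrames are purely illustrative assumptions, not anything the checklist prescribes.

```python
# Illustrative sketch only: pair logged inputs with logged outputs by a shared key.
# The `request_id` column and the toy data are assumptions for the example.
import pandas as pd

features = pd.DataFrame({
    "request_id": [1, 2, 3, 4],
    "user_age": [34, 51, 22, 46],
})
labels = pd.DataFrame({
    "request_id": [1, 2, 4],
    "clicked": [0, 1, 1],
})

# An inner join keeps only the rows where an input can be matched to its output.
paired = features.merge(labels, on="request_id", how="inner")

# A low match rate means your logging can't tell which input produced which output.
match_rate = len(paired) / len(features)
print(f"Matched {match_rate:.0%} of logged inputs to an output")
```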
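For the indifference-curves item, the point is that mistakes rarely cost the same. A rough sketch, assuming (purely for illustration) that a missed fraud case costs ten times as much as a false alarm:

```python
# Illustrative sketch: encode how much worse one kind of mistake is than another.
# The 10x cost ratio is an assumption for the example, not a recommendation.
import numpy as np

COST_FALSE_ALARM = 1.0    # flagging a legitimate transaction
COST_MISSED_FRAUD = 10.0  # letting a fraudulent transaction through

def expected_cost(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Average cost per decision under the asymmetric cost assumption."""
    false_alarms = np.sum((y_pred == 1) & (y_true == 0))
    missed_fraud = np.sum((y_pred == 0) & (y_true == 1))
    return (COST_FALSE_ALARM * false_alarms
            + COST_MISSED_FRAUD * missed_fraud) / len(y_true)

y_true = np.array([0, 0, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 0, 0, 0])
print(f"Expected cost per decision: {expected_cost(y_true, y_pred):.2f}")
```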
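For metric creation, the idea is to roll per-instance outcomes up into one number that tracks business performance over many instances. A sketch, with made-up dollar values standing in for whatever your business actually cares about:

```python
# Illustrative sketch: stitch per-instance outcomes into one business metric.
# The dollar values are placeholder assumptions for the example.
import numpy as np

def business_metric(y_true, y_pred, value_per_catch=50.0, cost_per_review=5.0):
    """Net value over many instances: value of true catches minus review costs."""
    catches = np.sum((y_pred == 1) & (y_true == 1))
    reviews = np.sum(y_pred == 1)
    return value_per_catch * catches - cost_per_review * reviews

y_true = np.array([0, 1, 1, 0, 0, 1])
y_pred = np.array([0, 1, 0, 1, 0, 1])
print(f"Net value on this batch: ${business_metric(y_true, y_pred):.2f}")
```

Notice that with these toy numbers, a system that flags every single instance would score surprisingly well, which is exactly the kind of perverse loophole the metric-review item asks you to hunt for before you trust the metric.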
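And for the minimum-performance bar, the commitment can be as blunt as a test that refuses to let the system ship. A bare-bones sketch; the 0.90 threshold and the fake test score are placeholders, not recommendations:

```python
# Illustrative sketch: a launch gate tied to a pre-committed minimum bar.
MINIMUM_ACCEPTABLE_SCORE = 0.90

def clears_launch_bar(test_score: float) -> bool:
    """True only if performance on the held-out test set makes the bar."""
    return test_score >= MINIMUM_ACCEPTABLE_SCORE

score_on_test_set = 0.87  # pretend result from your statistical test
if not clears_launch_bar(score_on_test_set):
    raise SystemExit("Below the minimum bar -- do not launch.")
```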
Once you’ve answered “yes” to all that, you’re ready to move to the next step of ML/AI, which involves data and hardware (and engineers, yay!). I’ll be putting out a guide on that soon.
If that summary was too short for you, the full guide to starting an AI project is here. Enjoy!
AI Reality Checklist was originally published in Hacker Noon on Medium.