FRDCSA | internal codebases | Learner


Architecture Diagram: GIF


Project Description

The point of Learner is that learning any function should require only the learner library. Each function to be learned is given a name (generally an extension of the current Perl module's name), so that the program need not specify storage files, etc. The type of learner to use is passed as an argument, after which calls to that learner type's specific functions may be made. As a bonus, Learner attempts to determine automatically which learner should be used. It should perhaps also perform memoization using FreeKBS.
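As a rough illustration of this interface, here is a minimal sketch in Python (the actual module is Perl); the class and method names, the trivial majority-vote learner, and the memoization helper are all hypothetical stand-ins, not the real API:

```python
# Hypothetical sketch of the interface described above; class, method,
# and learner-type names are illustrative, not Learner's real API.
import functools

class Learner:
    """A named learning task: the name keys the stored model, so callers
    need not manage storage files themselves."""

    _models = {}  # task name -> trained model state

    def __init__(self, name, learner_type="majority"):
        self.name = name
        self.learner_type = learner_type

    def train(self, examples):
        """examples: list of (input, label) pairs."""
        if self.learner_type == "majority":
            # Trivial baseline learner: remember the most common label.
            labels = [label for _, label in examples]
            Learner._models[self.name] = max(set(labels), key=labels.count)
        else:
            raise NotImplementedError(self.learner_type)

    def predict(self, x):
        # The majority baseline ignores x; a real learner would not.
        return Learner._models[self.name]

def memoize(fn):
    """Crude stand-in for the FreeKBS-backed memoization idea."""
    return functools.lru_cache(maxsize=None)(fn)
```

A caller would then write something like `Learner("MyModule::classify").train(pairs)` and later call `predict`, with `memoize` caching expensive learned functions between calls.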


  • With the package data we have, we could train a deep learner to identify the locations and text files in which the README is most often found.
  • The API learner learns what is called a protocol in Object-Oriented Programming in Common Lisp.
  • Use kbfs; have it learn when to automatically attribute facts to files based on certain correlations. Also use Sayer, Sayer-learner, and Thinker to learn when these facts apply to the files. Then, for instance, do automatic classification of text files into subject headings. Ultimately, organize all of the research papers and documents I have into a coherent, cohesive whole.
  • There was a paper on exploiting information using MDPs or something similar for attacking systems; I imagine that same technique could be used in the Sayer/Thinker/learner/Suppositional-Decomposer systems in order to optimize the exploration of the "hypothesis space".
  • Write something that classifies unilang entries by taking all the addressed messages, stripping them of their addressee names, and then running a feature learner over the text.
  • Figure out what is wrong with autoclassification. Add more features. Get more input. Perhaps use Weka with the output to generate some kind of learner.
  • We should develop a system for developing systems; that is, Boss should have high-level design criteria in mind. In other words, let us have a better-defined approach to building systems. To build the TDT, we should (after first searching for existing systems) collect data, choose a learner, implement the learner, etc.
  • Obviously, the learner for Radar must take into account the arguments as well as the search term when learning.
  • Hook up learner and quac to create a question-asker/question-answerer feedback loop.
  • I'm a good learner.
  • Maybe use Snow as part of perllib learner.
  • Things that I would like to work on: Irish TTS, Dictionary mapper, Thinker, Language learner, etc.
  • Write my all-language learner script, for Spanish.
  • learner can have a basic system for flagging when something demonstrates that it is non-deterministic (by simply observing which functions produce different output on the same input).
  • Sayer: that's what learner does; it memoizes function calls. That's what we need in order to train on the input information.
  • Sayer: the problem is similar to the unilang classification problem; of course I knew this. However, problems like multiple dispatch could be solved by training a learner, as could the longest-token problem for the Perl 6 parser. They are all related: the problem of figuring out which function to call based on the types of the inputs.
  • Maybe I should set quac up to record all answers it makes. Use learner?
  • It is necessary to estimate project lifetime in order to determine whether to use the software. Set up feature learners to predict this. Call this the learner.
  • Some screenshots (graphical representation of some aspects of the learner profile, generated by the Protégé OntoViz plugin)
  • Look into automatic spelling correction. Also, wow! You can use unilang to learn phrases, so you simply need to incorporate a phrase learner.
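The non-determinism flagging idea above can be sketched as a wrapper that records each function's output per input and flags the function the first time the same input produces a different output. This is a hypothetical Python illustration; the wrapper and registry names are made up:

```python
# Minimal sketch of non-determinism flagging: observe which wrapped
# functions return different output on the same input.
import functools

nondeterministic = set()  # names of functions caught varying

def observe(fn):
    seen = {}  # args -> first observed output
    @functools.wraps(fn)
    def wrapper(*args):
        out = fn(*args)
        if args in seen and seen[args] != out:
            # Same input, different output: flag the function.
            nondeterministic.add(fn.__name__)
        seen.setdefault(args, out)
        return out
    return wrapper
```

Wrapping a function with `observe` leaves its behavior unchanged; the `nondeterministic` set simply accumulates the names of any functions observed to vary.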
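The unilang-entry classification bullet above could begin with something like the following sketch; the leading "AgentName," addressing convention and the bag-of-words features are assumptions for illustration, not unilang's actual message format:

```python
# Sketch of the classify-unilang-entries idea: strip the addressee name
# from each message, then extract features for a downstream learner.
from collections import Counter

def strip_addressee(message):
    head, sep, rest = message.partition(",")
    # Treat a short, spaceless prefix before the first comma as the
    # addressee name (an assumed convention).
    if sep and rest and " " not in head.strip():
        return rest.strip()
    return message

def features(text):
    # Bag-of-words feature vector for a feature learner.
    return Counter(text.lower().split())
```

Running `features(strip_addressee(msg))` over all addressed messages would yield the training instances for the feature learner.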

This page is part of the FWeb package.
Last updated Sat Oct 26 16:53:41 EDT 2019.