For those interested, I created a gem in Ruby for comparing the similarity of images by computing the dhash of an image and then calculating the Hamming distance between the hashes: https://github.com/rohanpatel2602/ruby-dhash
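For anyone curious how the comparison works without pulling in the gem: a dHash reduces each image to a compact bit fingerprint, and similarity is just the Hamming distance between two fingerprints. A minimal sketch of the distance step (not the gem's actual API):

```ruby
# Given two dHash values (integers), the Hamming distance is the number
# of bit positions where they differ. XOR the hashes, then count the set
# bits; a small distance means the images are visually similar.
def hamming_distance(hash_a, hash_b)
  (hash_a ^ hash_b).to_s(2).count("1")
end

hamming_distance(0b1011_0010, 0b1010_0110)  # => 2
hamming_distance(0b1011_0010, 0b1011_0010)  # => 0 (identical images)
```

For 64-bit dHashes, a distance of 0 means near-duplicates, and a low threshold (commonly around 10, though the cutoff is a tuning choice) is treated as "similar".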
To clarify, while we intend to use AI to solve a variety of different problems, we're not using it for actual code synthesis (i.e. building apps). Instead, we're leveraging code reusability and programmatic stitching/merging for our software assembly line.
In addition to that, we are leveraging various AI/ML techniques throughout the rest of the product development lifecycle, for areas such as pricing/specing/ideation, infrastructure management/scalability, code reusability itself and matching, creator (developer/QA/design) resource matching, sequencing and dependency prioritization, and more.
Yeah, none of that sounds like AI. It sounds like standard features of IDEs and PaaS. I can't imagine you have a programmatic way to save much time on pricing/specing/ideation, because machines can't do that yet.
Also, the clear message of the company was "AI writing code that would otherwise be written by humans".
Again, would strongly suggest you stop posting anything about this situation without consulting a lawyer. Based on your HN posts, you can't claim ignorance anymore.
You're more or less correct in your understanding, we're trying to get to 80%, and are certainly not there yet.
The other thing to keep in mind is that there's a lot more that goes into developing software than just code.
We've already been able to automate problems such as selecting the optimal creators (developers, QAs, designers) for a given project from our capacity network, pricing out and estimating timelines for a project (something that takes our competitors 2-3 weeks), onboarding and evaluating engineers on our platform, and predicting, arbitraging, and scaling cloud infrastructure for our clients' projects, along with a bunch of other areas.
There's definitely a long way for us to go, however we have been able to show proven success on these problem areas already.
But how accurate are your quicker timeline estimates? Being faster isn't the only factor that matters.
However, I disagree with your third paragraph: you have not solved any problems in the area you are raising money for, which is AI-created software. Best I can tell, based on your comments, you are just streamlining onboarding, hiring, and estimating. No offense, but none of this has anything to do with AI, IMO. It's just a glorified project management and outsourcing company at this point.
This post doesn't clear much up. The things you describe that are done by AI sound like project bootstrappers, libraries, or code-gen (in an IDE). None of those require "AI".
I just ran a tool that bootstrapped most of a CRUD app for me. Was it AI? No, because the program I ran didn't do any app-specific coding.
My honest advice is to talk to a lawyer and get this company off your resume ASAP.
AI is certainly not magic, and as an industry we're super far away from what would be considered real AI in the technical sense.
That being said, AI has become a catch-all term for everything from simple linear regressions all the way through to neural networks.
We don't claim to be able to write apps using AI, we're a platform that is trying to use AI and general automation in order to optimize the traditional SDLC. Actual code generation/synthesis is years away in my opinion and there is far more impact that can be had by going after other manual aspects of software development.
> we're a platform that is trying to use AI and general automation in order to optimize the traditional SDLC
I don't think you can get away with corp-speak/buzzwords here this easily. Could you elaborate on how exactly you're using AI to "optimize" software development?
To me this sounds like their product is a pile of boilerplate / templates that can be combined to build an app.
If I were to take a guess at the flow: as a customer creates a "new app," they go through some wizard-type process that starts to narrow down which templates are needed and what information to prompt the customer for.
Once they have all of that they take that bundle of templates and "content" and hand it off to some developer to glue it all together and then perhaps add some other automation to handle small changes by the customer later automatically.
Could be a clever way to speed up app development if you can narrow the scope down but "AI" it is not.
You are exactly correct. I saw them do a demo at the Collision conference in Toronto and the process involved going through a lengthy setup wizard about the project and its characteristics.
Glad you were able to stop by at Collision!
The process today is certainly not as user friendly as we'd like and can be quite time consuming. We're doing a revamp of the particular experience you saw in Toronto to streamline the process and add the ability to create clickable prototypes automatically!
Though we're still developing that tool, we intend to unveil it at WebSummit this year. Hope to see you there and get your feedback on it!
Happy to elaborate - in a nutshell what we're trying to do is automate as many parts of the traditional software development lifecycle as we can, and for whatever cannot be automated, put in place the right tooling to allow for repeatable results.
Our thesis is that most applications today have a huge amount of duplication at both the code level and the process level. We're trying to use reusable building blocks (well-structured libraries, templated user stories, wireframes, common errors, etc.) to immediately solve that duplication. That being said, we're not talking about automatic code generation; it's more about being able to assemble these reusable building blocks together at the beginning of a project so you have a better starting point. There will always be customization required for any project, however, and that is a human-led process.
Apart from actual development, we're also trying to automate processes around project management, infrastructure management, and QA.
For example, what we've already been able to do is automatically price and create timeline estimates for a project without any human involvement, determine which creators on our network are best suited for a given project, evaluate and onboard developers onto the network, set up developer environments, and a lot more!
Sorry if I'm missing something obvious, but it's not very clear to me how the first part significantly benefits from AI. Code reuse is just good software engineering practice. Are you somehow able to figure out which libraries to use automatically? Isn't this trivial for a human to do anyway?
The latter part, as far as figuring out what work to assign and estimating time-frames does seem like a legitimate AI use case though.
We're attempting to tackle the problem holistically. That means that we're tackling every single step of the traditional product development process. All the way from how you ideate, price, and spec, to sourcing and managing developers through to QA and infrastructure management.
For example, today, our ideation/pricing/spec tools leverage applied ML, creator management leverages facial recognition for fraud prevention, and infrastructure management uses statistical modelling.
We're trying to make code reuse a repeatable and predictable process rather than just a best practice. Today in the industry it's purely led by developers, and very often done solely at their discretion in a manual fashion. We're attempting to enforce code reuse at the platform level, across autonomous distributed teams and products. Apart from deciding what the optimal building blocks for a project are, the actual assembly or intelligent merging of these building blocks in an automated way is non-trivial and mirrors modern automotive assembly lines.
Can you provide specific examples and validation? I too can write a program that "creates timeline estimates for a project without any human involvement" - doesn't mean that its estimate is accurate. Can you provide specific examples of how you are automating traditional SDLC using AI?
You are here on a forum of technical people, can you be appropriately technical?
All of our project timelines are generated fully automatically. Today we are hovering at around a 90% accuracy on those estimates, and are moving more and more towards solving that last 10%.
We put our money where our mouth is - for example, if our system generates a spec with a timeline of 10 weeks and a price of 10K, and we take 15 weeks, we do not charge more than 10K.
Unfortunately I can't reveal more details of how we generate those timelines automatically, apart from the fact that it uses NLP, CNNs, and regression analysis, as it is proprietary and core to our business.
Lots of companies don't charge for work that falls outside the estimated amount of time, you guys are far from the only ones doing that. It doesn't take AI to do that. And anyone would find your description of the methodology vague to the point of being useless.
> NLP, CNNs, and regression analysis
No one is asking you to reveal your algorithms in detail, but any information at all besides just naming 3 statistical methods would go a long way in convincing people of the validity of your assertions.
Maybe you're just using human estimators and are using NLP/CNN/regression analysis to compute their daily coffee supply.
Apologies if it came across as vague. You're welcome to try out our pricing and timeline estimation system if you'd like to get a sense for how it works - it's all public (https://builder.engineer.ai).
That particular tool uses historical data from our user story management system and repository system to glean insights such as average amount of time taken on customizing features, complexity of features and the interactions between them, common errors, developer efficiency by feature grouping, etc. This is all then used as input data into our pricing and timeline estimation system.
Collecting this data was no small feat, we had to build a significant amount of project management and developer tooling in order to get the granularity of data required.
This is also why we're confident that we'll be able to improve our accuracy beyond 90% - as we build more projects, the data collected from that process will feed back into these models.
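As an outside illustration only (the actual model is proprietary, and the data below is invented): the general shape of a regression-based estimator fed with historical project data, as described above, can be sketched in a few lines. Here feature count stands in for the much richer inputs mentioned (complexity, interactions, developer efficiency, etc.):

```ruby
# Toy sketch, not the real model: fit an ordinary least-squares line
# mapping feature count -> build time in weeks, then use it to estimate
# a new project. All numbers are made up for illustration.
def fit_line(xs, ys)
  n = xs.size.to_f
  mean_x = xs.sum / n
  mean_y = ys.sum / n
  slope = xs.zip(ys).sum { |x, y| (x - mean_x) * (y - mean_y) } /
          xs.sum { |x| (x - mean_x)**2 }
  [slope, mean_y - slope * mean_x]
end

# Hypothetical history: [feature count, weeks actually taken]
history = [[5, 4], [10, 7], [20, 13], [40, 25]]
slope, intercept = fit_line(history.map(&:first), history.map(&:last))

# Estimate a new 30-feature project
estimate_weeks = slope * 30 + intercept  # => 19.0
```

The claimed feedback loop is exactly this shape: every completed project adds a row to `history`, which tightens the fit for the next estimate.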
The problem I've always seen with structuring code is that text files are basically one-dimensional, meaning you always run into conflicts between putting things close together that are similar in one dimension vs. those that are similar in another dimension.
I'm not lucid enough tonight to give good concrete examples, or be more specific about how this relates to repetition, but I feel like there are deep problems with designing software automatically even looking at mundane, small scale stuff.
Which btw is absolutely an artificial intelligence application albeit one that has nothing to do with neural nets and deep learning.
Perhaps, if your company has trouble with acquiring data and training large deep neural nets, you could benefit from looking at other techniques that do not have such stringent requirements and that are much better suited to smaller companies (i.e. anyone but Google, Facebook, Amazon, Netflix et al).
I'm very curious how are you using AI to optimize software engineering. Do you have a linter that catches bugs? Do you have a system that generates code from a very high level language? I'm really curious what is the actual part of software development you guys managed to automate. Saying "we use AI and automation to optimize software development" doesn't make any sense if you can't explain exactly which problem you guys managed to solve. When you go to an aerospace engineer you don't say "Hey! I managed to optimize airplane building". You say "Hey! I made this software that given your jet engine, optimizes it for cost/safety/efficiency!"
It's not just one problem we're tackling, it's actually more like 40 small issues that we're working on.
You actually named a few right there - static code analysis, automatic UI generation from YAML. I also want to be clear that not all of it is AI or ML. For example how we price and spec out ideas (https://builder.engineer.ai/) is fully automated and leverages NLP and NNs, and how we handle developer verification uses facial recognition. However many of the problems we are going after don't require AI; heuristics based approaches and statistical models can actually have better results in many cases.
We've been very transparent with our investors on where we are in the process of creating this platform both pre-investment and post. They actually responded to the WSJ article:
A spokeswoman for Deepcore said it has complete confidence in Mr. Duggal’s vision and team.
A spokesman for Jungle Ventures said it is a proud investor in Engineer.ai and its technology, adding that “the AI landscape is a varied spectrum.”
A Lakestar spokeswoman said it also has confidence in Engineer.ai and its team, adding that “growth in the AI space does not happen overnight.” It said Engineer.ai had been very careful in presenting its technology to Lakestar and other investors.
All due respect but I wouldn't expect a VC to say anything else. These guys want to be able to peddle their stake onto someone else in a future round. I'll be more interested in what those investors do in your next funding round.
Apologies for that experience!
We're currently working on a new iteration of that site and will be launching it very shortly. Come back soon and let us know what you think.
Ruby on Rails out of the box gets a bad rap for being slow and non-performant. Has anyone replaced MRI with JRuby or TruffleRuby to mitigate that? Were there any significant "gotchas" with integrating it?
I did a POC getting a monorail running on JRuby (probably 6?) years ago. It wasn't for performance reasons (though we did have some GC issues, which JRuby might have helped). It was because we had to integrate with some services which only vended APIs via JARs.
The biggest dealbreaker was that (like most Rails apps) we had a ton of gems, some of which had native extensions. JRuby doesn't support C native extensions.
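For gems that do have a JRuby-compatible alternative, Bundler's `platforms` blocks let you swap implementations per engine without forking the Gemfile (sketch below; the gem choices are illustrative, not what my team actually shipped):

```ruby
# Gemfile sketch: Bundler resolves each block only on the matching
# engine, so MRI keeps the C extension while JRuby gets a JDBC-backed
# driver instead.
platforms :ruby do
  gem "mysql2"  # C native extension, MRI only
end

platforms :jruby do
  gem "activerecord-jdbcmysql-adapter"  # pure-JVM replacement
end
```

The dealbreaker is the long tail of gems where no such alternative exists, which is where this approach runs out.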
In the end, it was way easier (and a much less risky change) to spin up our own Java REST APIs in front of those JARs, and let our MRI Rails app integrate with them by talking to those REST endpoints.
Just for fun, I also tried using dRuby so an MRI Rails app and a small JRuby (non-Rails) process could communicate idiomatically. It was cool, but I'm pretty sure dRuby was eventually deprecated.
Been following TruffleRuby. I recently spun up a super-simple dummy app where I swapped MySQL out for PostgreSQL, made a migration to create a table, and generated some fake data. I opened a console and ActiveRecord was able to query. That's a big step forward, as they had said the ActiveRecord connections were some of the biggest hurdles.
Does this story answer your question about performance? Not really, but if I wanted to dive deeper (which I do at some point!) I think TruffleRuby is a real option.
ActionCable is super easy to use out of the box to get WebSockets running, but it's incredibly non-performant. Anything more than 100 concurrent users took the time to send a message to the socket from milliseconds to seconds. Replacing it with the AnyCable gem in conjunction with AnyCable-Go got us to over 1,000 concurrent users without a hitch.
I'd say it's more: use Rails for all of your standard CRUD operations, and then use Go as a module in Rails via Quartz/FFI if you have any algorithms that need to be high-performance. Of course, you could always go down the microservice route and spin up a Python/Go service for your more intensive data-processing modules.
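The FFI half of that is less scary than it sounds. As a dependency-free illustration of the pattern (using stdlib Fiddle and binding libc's `strlen` rather than an actual Go library, so it runs anywhere), the call shape is:

```ruby
require "fiddle"

# Illustrative only: bind a C-ABI function already loaded in this
# process. A Go library built with `go build -buildmode=c-shared`
# exposes the same C ABI, so you'd Fiddle.dlopen("yourlib.so") and
# bind its exported function the same way.
libc = Fiddle.dlopen(nil)  # nil = symbols visible to this process
strlen = Fiddle::Function.new(
  libc["strlen"],
  [Fiddle::TYPE_VOIDP],  # argument: char *
  Fiddle::TYPE_LONG      # return value
)

strlen.call("hello world")  # => 11
```

The Quartz gem the parent mentions wraps this kind of plumbing (plus struct marshalling) for Go specifically; the sketch above is just the raw mechanism.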
We had huge performance issues with ActionCable that basically made it unusable for more than 100 concurrent users. We considered EventMachine but ended up going with AnyCable in conjunction with AnyCable-Go, which got us up to 1,000 concurrent users without any performance impact.
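For anyone weighing the same swap, the moving parts are small. A sketch of the cutover (hostname is a placeholder; this assumes `gem "anycable-rails"` in the Gemfile and `adapter: any_cable` in `config/cable.yml`):

```ruby
# config/environments/production.rb — sketch of the AnyCable cutover.
# ActionCable's in-process server is replaced by two external processes:
# the anycable-go WebSocket server and a Rails-side RPC server that
# still runs your existing channel classes. Browsers connect to
# anycable-go instead of Rails:
config.action_cable.url = "ws://cable.example.com:8080/cable"
```

Then run `bundle exec anycable` (the RPC server executing your channel code) alongside `anycable-go --port 8080`; your channel classes themselves don't need to change.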