When we embarked on this journey, there were many things we didn’t know.
What does hand tracking need to be like for an augmented reality headset? How fast does it need to be; do we need a hundred frames per second tracking or a thousand frames per second?
How does the field of view impact the interaction paradigm? How do we interact with things when we only have the central field, or a wider field? At what point does physical interaction become commonplace? How does the comfort of the interactions themselves relate to the headset’s field of view?
What are the artistic aspects that need to be considered in augmented interfaces? Can we simply throw things on as-is and make our hands occlude things and call it a day? Or are there fundamentally different styles of everything that suddenly come out when we have a display that can only ‘add light’ but not subtract it?
These are all huge things to know. They impact the roadmaps for our technology, our interaction design, the kinds of products people make, what consumers want or expect. So it was incredibly important for us to figure out a path that let us address as many of these things as possible.
To this end, we wanted to create something with the highest possible technical specifications, and then work our way down until we had something that struck a balance between performance and form-factor.
All of these systems function using ‘ellipsoidal reflectors’, or sections of curved mirror which are cut from a larger ellipsoid. Due to the unique geometry of ellipses, light leaving one focal point of the ellipsoid is reflected toward the other; place a display near one focus and the user’s eye near the other, and the resulting image is big, clear, and in focus.
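For the curious, here is a minimal numerical sketch of that focal property in a 2D slice of the ellipsoid. The dimensions are illustrative, not the actual reflector prescription:

```python
import numpy as np

# Sanity check of the ellipse reflection property: a ray leaving one focus
# reflects off the curve and passes through the other focus. In the headset,
# the display sits near one focus and the eye near the other.
a, b = 100.0, 70.0                              # semi-axes in mm (illustrative only)
c = np.sqrt(a**2 - b**2)                        # distance from center to each focus
f1, f2 = np.array([-c, 0.0]), np.array([c, 0.0])

for theta in np.linspace(0.1, np.pi - 0.1, 7):
    p = np.array([a * np.cos(theta), b * np.sin(theta)])  # point on the ellipse
    n = np.array([p[0] / a**2, p[1] / b**2])              # outward surface normal
    n /= np.linalg.norm(n)
    d = (p - f1) / np.linalg.norm(p - f1)                 # incident ray from focus 1
    r = d - 2 * np.dot(d, n) * n                          # mirror reflection at p
    to_f2 = (f2 - p) / np.linalg.norm(f2 - p)
    assert np.allclose(r, to_f2, atol=1e-9)               # reflected ray hits focus 2
print("every sampled ray from focus 1 reflects through focus 2")
```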
We started by constructing a computer model of the system to get a sense of the design space. We decided to build it around 5.5-inch smartphone displays with the largest reflector area possible.
Next, we 3D-printed a few prototype reflectors (using the VeroClear resin with a Stratasys Objet 3D printer), which were hazy but let us prove the concept: We knew we were on the right path.
The next step was to carve a pair of prototype reflectors from a solid block of optical-grade acrylic. The reflectors needed to possess a smooth, precise surface (accurate to a fraction of a wavelength of light) in order to reflect a clear image while also being optically transparent. Manufacturing optics with this level of precision requires expensive tooling, so we “turned” to diamond turning (the process of rotating an optic on a vibration-controlled lathe with a diamond-tipped tool-piece).
Soon we had our first reflectors, which we coated with a thin layer of silver (like a mirror) to make them reflect 50% of light and transmit 50% of light. Due to the logarithmic sensitivity of the eye, this feels very clear while still reflecting significant light from the displays.
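A rough way to see why a 50/50 coating still feels clear: because brightness perception is roughly logarithmic, halving the transmitted light costs only about one photographic stop. This is a back-of-the-envelope Weber-Fechner intuition, not a photometric model of the actual coating:

```python
import math

# Halving the light through the combiner, expressed in perceptual-ish units.
transmission = 0.5
stops_dimmer = math.log2(1 / transmission)   # ~1 photographic stop
log_units    = math.log10(1 / transmission)  # ~0.3 log10 units
print(f"{stops_dimmer:.1f} stops, {log_units:.2f} log units dimmer than an unobstructed view")
```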
We mounted these reflectors inside of a mechanical rig that let us experiment with different angles. Behind each reflector is a 5.5″ LCD panel, with ribbon cables connecting to display drivers on the top.
While it might seem a bit funny, it was perhaps the widest-field-of-view, highest-resolution AR system ever made. Each eye saw digital content approximately 105° high by 75° wide with a 60% stereo overlap, for a combined field of view of 105° by 105° with 1440×2560 resolution per eye.
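Those numbers are self-consistent if you take “stereo overlap” to mean the fraction of each eye’s width shared with the other eye; a quick sketch of that arithmetic (the per-eye width for the cropped configuration below is our inference, not a quoted spec):

```python
def combined_horizontal_fov(per_eye_deg: float, overlap_frac: float) -> float:
    """Total horizontal FOV of two eyes whose views share a fraction of each eye's width."""
    return 2 * per_eye_deg - overlap_frac * per_eye_deg

# Numbers quoted above for the big-reflector prototype: 75° wide per eye, 60% overlap.
print(combined_horizontal_fov(75, 0.60))  # 105.0, matching the 105°x105° combined figure

# The cropped 'sweet spot' described further down (95° wide combined at 65% overlap)
# is consistent with roughly 70°-wide per-eye views.
print(combined_horizontal_fov(70, 0.65))  # 94.5, close to the quoted 95°
```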
The vertical field of view struck us most of all; we could now look down with our eyes, put our hands at our chests and still see augmented information overlaid on top of our hands. This was not the minimal functionality required for a compelling experience, this was luxury.
This system allowed us to experiment with a variety of different fields of view, where we could artificially crop things down until we found a reasonable tradeoff between form factor and experience.
We found this sweet spot around 95°×70° with a 20° vertical (downward) tilt and a 65% stereo overlap. Once we had this selected, we could cut the reflectors to a smaller size. We found how much we could trim empirically, by wearing the headset and marking the reflected displays’ edges on the reflectors with tape. From here, it was a simple matter of grinding them down to their optimal size.
The second thing that struck us during this testing process was just how important the framerate of the system is. The original headset boasted an unfortunate 50 fps, creating a constant, impossible-to-ignore slosh in the experience. With the smaller reflectors, we could move to smaller display panels with higher refresh rates.
At this point, we needed to make our own LCD display system (nothing off the shelf goes fast enough). We settled on a system architecture that combines an Analogix display driver with two fast-switching 3.5″ LCDs from BOE Displays.
Put together, we now had a system that felt remarkably smaller:
The reduction in weight and size felt exponential: every time we cut away one centimeter, it felt like we cut off three.
We ended up with something roughly the size of a virtual reality headset. On the whole it has fewer parts and preserves most of our natural field of view. The combination of the open-air design and the transparency generally made it feel immediately more comfortable than virtual reality systems (which was actually a bit surprising to everyone who used it).
We mounted everything on the bottom of a pivoting ‘halo’ that let you flip it up like a visor and move it in and out from your face (depending on whether you had glasses).
Sliding the reflectors slightly out from your face gave room for a wearable camera, which we threw together from a disassembled Logitech (wide-FoV) webcam.
All of the videos you’ve seen were recorded with a combination of these glasses and the headset above.
Lastly, we want to do one more revision of the design to make room for enclosed sensors and electronics, better cable management, cleaner ergonomics and better curves (why not?), and support for off-the-shelf head-gear mounting systems. This is the design we are planning to open source next week.
There remain many details that we feel would be important to further progressions of this headset, some of which are:
- Inward-facing embedded cameras for automatic and precise alignment of the augmented image with the user’s eyes as well as eye and face tracking.
- Head mounted ambient light sensors for 360 degree lighting estimation.
- Directional speakers near the ears for discreet, localized audio feedback.
- Electrochromic coatings on the reflectors for electrically controllable variable transparency.
- Micro-actuators that move the displays by fractions of a millimeter to allow for variable and dynamic depth of field based on eye convergence (a rough sketch of the vergence geometry follows this list).
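To give a sense of the geometry that last item relies on, here is a hypothetical vergence-to-depth sketch; the interpupillary distance and angles are illustrative, and this is not part of the current design:

```python
import math

def vergence_depth_mm(ipd_mm: float, vergence_deg: float) -> float:
    """Fixation distance implied by a given convergence angle, assuming symmetric gaze."""
    return (ipd_mm / 2) / math.tan(math.radians(vergence_deg) / 2)

# With a 63 mm IPD (illustrative), small changes in vergence angle map to large
# changes in fixation depth, which is what display micro-actuators would chase.
for angle in (0.5, 1.0, 2.0, 4.0):
    print(f"{angle:>4}° vergence ~ {vergence_depth_mm(63, angle) / 1000:.2f} m")
```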
The field of view could be even further increased by moving to slightly non-ellipsoidal ‘freeform’ shapes for the reflector, or by slightly curving the displays themselves (like on many modern smartphones).
Mechanical tolerance is of the utmost importance, and without precise calibration, it’s hard to get everything to align. Expect a post about our efforts here as well as the optical specifications themselves next week.
However, on the whole, what you see here is an augmented reality system with two 120 fps, 1600×1440 displays with a field of view covering over a hundred degrees combined, coupled with hand tracking running at 150 fps over a 180°×180° field of view. Putting this headset on, the resolution, latency, and field of view limitations of today’s systems melt away and you’re suddenly confronted with the question that lies at the heart of this endeavor:
What shall we build?
Update: The North Star headset is now open sourced! Learn more ›
Needs to be smaller, as in “Ready Player One” https://uploads.disquscdn.com/images/52fd8ae645a31278eaf4373ad9a6de179de143abba417ade89a1f07cc034b3b2.jpg
That’s a VR headset, and it’s from a science fiction movie. By 2044 we’ll have much smaller form factors than that! In the meantime the aim of Project North Star is not to produce a consumer-ready headset, but to explore what a fully realized AR experience can feel like.
One of the reasons I started talking to eMagin a couple of years back about their 2″ panels. At the time they were way too expensive, but it seems they have brought the cost down significantly, and I’m looking forward to putting this Steampunk XR concept into reality 🙂 https://uploads.disquscdn.com/images/3e0b0e017dbff76d9d1d66627ba57d3afb2b8d24abb291d895556bc3295749bc.jpg
These designs cannot match the FOV of current-gen VR headsets (which are already too narrow for many people). Very hard to do that with microdisplays.
Yep, the optics are expensive, requiring doublets. Also, I misquoted the size as 2″ when they are in fact 1″ with a 2K×2K resolution.
Incredible! The demo hand interfaces look better than most SciFi movies.
Do you plan to eventually start tracking head position relative to the world?
Cannot wait to hear more and to get my developer hands on this. Beautiful work so far!
Positional tracking is critical! Fortunately a lot of companies are working on solutions for this.
Depends on the purpose: outside-in tracking would be relatively limited to a small room,
whereas inside-out tracking (SLAM) would allow wider motion. So it would require an IMU, a SLAM algorithm, and why not a dedicated AI chip, like HoloLens with its HPU, but more generic: SLAM for tracking and relative position / head motion, and AI also for object detection, segmentation, etc.
As soon as I get my hands on those open source details, there will be many a thing to build 🙂 Killer work, been a big fan of Leap since day 1.
Yeah, me too. One suggestion: make a spatial computing system.
ARKit? That’s on iOS. So maybe send the plane detection info over to the computer, and use that to pin content for hands to work with.
This looks just amazing; as a developer who already uses Leap Motion in VR, I can’t wait to test it and develop on it! Are you also gonna sell the new version of the Leap Motion?
No plans to sell it on a broad level, though we do make it available for select partners. The future is embedded 😀
Good to know. I’m actually a student in the last year of a Master’s degree and I have just presented a student project with our own haptic gloves, using Leap Motion as the tracking system, at Laval Virtual. Our project has generated a lot of interest and we thought about making it a real product. But we would also like to test what it is possible to achieve with the new Leap Motion. Maybe we could get in contact to speak about it.
Hi Hugo. May I be in touch with you? I didn’t go to the Laval Virtual festival this year, but I’d like to develop my own VR glove, not haptic but tactile! So it would be highly complementary to your haptic glove. Here’s my email: gaffya (at) gmail (.)com (either French or English, depending on your location)
Please reach out to me at info at neod.io. I have already been working with some other solutions to add SLAM-based sensors to a few off-the-shelf mirrored-surface-based AR/XR headsets. https://uploads.disquscdn.com/images/cd472bffa9c9841fba81fa45aedead280b15ecccbf9f2f40263ef09706f6ce71.jpg If things go well we should be showing something off at AWE.
So will it be a tethered solution or standalone?
Big vertical FOV is critical for some applications, can confirm 🙂
When can we expect a developer kit?
Why one or the other? Tethered vs wireless. I never understood the choice! If we can build one or the other, we can build both! So both solutions would be great in this new AR headset!
A computer has both: USB and a little chip for wireless; it doesn’t require a lot of room.
Think big: it should not only have 2 solutions (tethered vs wireless), but 3! Tethered + wireless + inner hardware.
Adding an inner chip for specific computations, like AI chips (Qualcomm or Intel, for example): why?
To give the headset an inside-out tracking solution + an AI solution. In some cases we don’t care about that information, so it should stay inside the headset (e.g. only 3D placement and visual stability). Sometimes we only care about surfaces: tables, walls, dynamic elements… so the AI chip would only send this information (main spatial points, not all the data) through Wi-Fi ah/ax or 5G.
A tether, on the other hand, would be great if we think of big, high-quality graphics, where an external computer is required, like within a backpack.
Inner hardware:
In the headset: IMU, USB (for future additional add-ons), side cameras (for side hand tracking).
(Could be offloaded to the belt): AI chip, USB (in & out), wireless + Bluetooth + GPS chip, battery.
Additional: controllers.
So if we can’t buy it, how do I get my hands on this? Make it? Where do I get the lenses?
now Meta is dead.
Funny, I’m on a parallel path of using off-the-shelf OEM headsets with both the uSens Fingo and Occipital Structure Core, plus iOS/Android phones and an NVIDIA TX1 system. https://uploads.disquscdn.com/images/ad958064744612284f0acfc1b4346ca45fe60ea48dba8a7c299d0755327334e5.jpg
OK, I am very interested. Will this be tethered? Or will it run its own hardware? Or both? How does it interact with the Leap Motion controls? How portable is it?
An idea that I have had for a long time would be to have software that can run on any type of computer and then treats its hardware as a resource to be used by an augmented reality device, not screen mirroring. So, for example, an Android device wouldn’t run a Windows PC program very well, especially not a high-end one, but if the Windows hardware was a resource that the Android device could use, it would allow it to run the software within its own environment. Similarly, your smart house could be a resource accessible to the AR device, allowing it to directly control hardware like light controls and such.
Of course, that probably won’t be practical for a wireless device before 5G at least, and of course there would be a need for a compatibility layer for OS-specific code, but it’s something I’d like to see, no pun intended.
I’d like to be able to sit at school with a Bluetooth keyboard and mouse, then call up one or more virtual windows that only I could interact with; but if another student was also wearing glasses, I could invite them to share one or more windows, so then we’d both see the same window, but we’d otherwise have our own separate AR experiences. The window, in turn, would just be remote-accessing my development machine at home so I have all that power in a portable package. Then when I get home, I’d sit on a couch, call up a larger screen, and start watching streaming television on a virtual monitor that only I could see. A family member could then watch it with me, or they could watch something else entirely in the same space without either of us bothering the other. If I want to see what else is on, I could call up a virtual controller. Then when I go for my evening bike ride I could call up a window slightly offset, where I could review a lecture video without being stopped from seeing where I am going, and if I get a call I could see an offset window of the call and accept it with a gesture or my voice, and even see a video stream if they have a camera. (And on that point, cell phones should already have native video calling working between networks, but they don’t.)
Regardless of how long it takes to get there, my opinion is this: the first portable AR device with 5G that is capable of, at a minimum, replacing the functions of the smartphone and tablet will be a market success. Even if the price is higher than a smartphone, the smartphone’s success has proven that people are willing to pay more for multifunctional devices, and with a SIM card it means partnerships with phone companies could allow a payment plan to make it more accessible.
From there, the first AR device that does the above and is also able to function without tethering, as well as in a wireless tethered environment (cloud and personal), will essentially win the format war. Field of view is of course important, but not as important as that.
As an additional thought though, why not go the extra step and try partnering with EMOTIV to enable an AR device with built-in neural controls? So you’d have hand gestures, and eye tracking, and thought commands. With such a combination even someone paralyzed from the neck down could functionally interact with people and virtual objects, by looking at a window and then thinking about opening it. Or maybe, combine such a setup with a drone controller app so that such a person could feel themselves flying freely despite being bound in place.
Also, I wonder how the gesture commands would work with people who have an extra finger, or are missing fingers. Similarly, there’s a prosthetic project called “third thumb” which gives a person a second thumb on one hand that they control with their big toe. It would be interesting to see how well such things could work in AR.
Would really like to explore this in combination with eye tracking. Had some experience ( https://youtu.be/NzLrZSF8aDM ) with Leap and eye gaze in VR. But this tech for AR, with hand tracking over a much wider field of view, is something I do dream of.
I have two eye tracking devices: one is the FOVE VR, and the other is the Emotiv Insight. What’s interesting about the EMOTIV approach is that they achieved eye tracking without looking at the eyes at all. Unfortunately, I don’t own any VR or AR device that would be compatible with a neural interface device because of the way both need to sit on the head. The only hope would be an integrated approach, and since they seem to want to push the boundaries I figured they’d find the idea interesting.
Suppose, for example, you had neural-based eye tracking and thought commands, and also had a virtual menu attached to your hand like they are showing above. With the combo I suggest, you’d just look at the menu option you want, think about it, and then it’s selected.
Very much agreeable. Need more research!
There’s a lot to unpack here, but we can say the prototype runs on a PC but there’s no reason it couldn’t use a high-end mobile processor like a Snapdragon 835.
This is exactly what I have been looking for to marry to my Occipital CORE sensor. How can we talk about getting the STEP files and lenses?
Any reason why you didn’t mount the panels above the lenses? IP or other limitations you ran into? I have done something similar with a torn-apart Microsoft/HP headset with its 2.9″ panels, coupled with a few OEM AR headsets. Of course the smaller panels provide a smaller FOV, but the proof of concept was encouraging (and built-in positional tracking to boot) with my Intel NUC belt system. Seeing your prototype is very intriguing. I commented elsewhere, but would love to talk and trade notes. (Sorry, no images to share yet until I polish it up a little, but it should look like this version using Occipital’s Structure Core in a special housing for the Mira Prism.) https://uploads.disquscdn.com/images/aef314e442617c6f4e7598e7d45a65cac570c951bde38702c918c27686ce06af.jpg
I want the big bulky version asap instead of waiting for people to build their own and start selling them. Anyone at Leap Motion want to make a few hundred on the side? =P
It seems you are mounting the panels in this fashion for the vertical orientation. It should also reduce the distance from the eyes as well. I am assuming that, due to the light weight of the panels, having the mass offset to the sides does not affect rotational inertia either. I personally feel distributing the electronics along the top of the skull reduces stress on the forehead as well, instead of mounting it along the brow. Combining the sensors and electronics on the same PCB probably reduces overall cost and may not add that much weight to the whole experience anyway, but if you were to mount a mobile SoC and battery system, I would definitely move these towards the back.
First: thank you a lot for sharing. I’ve been following Leap Motion since the first version.
Designing an AR headset in order to open-source it is a big slap for big companies, like the 3+ billion dollar Magic Leap.
You aim to add eye tracking and other things, which is smart.
Why not have glass on the side for a peripheral AR view? As you tested and designed for the eyes to look down, why not the ability to look to the side?
The Leap Motion is placed so that we can use our hands in front of us, but you wrote “throw things”; we should “throw” from the side and so track the side motion: Leap Motion on the side?
Third thing: the future is with inside-out tracking (SLAM) with dedicated hardware. Why? Once again, you mentioned it: occlusion and object placement. On the other hand, I’m planning to build a tactile/haptic glove, so the ability to compute distance would be great (inside-out tracking system).
Could we subscribe to receive an email when you’re ready to send an early/dev version, or launch a Kickstarter campaign? I’d be interested. Thanks to the whole team.
We have no plans to create developer kits or launch a crowdfunding campaign, but you can sign up for general updates at http://leapmotion.com/newsletter.
One of the most exciting blogs I’ve read on the internet all year. You people are my heroes. Can’t wait to try to build one in our lab.
Looks like a great halloween helmet…a cross between Lord Megatron and Darth Vader. No one is gonna wear that shit on their head.
It’s almost as though it’s a prototype designed to give the maximum possible user experience and feature set, so that the user can experience what a product from the future would be like, without consideration towards making it look nice. But that would be silly!
Awesome sauce! Please let us know about the open source info as soon as possible; we’d love to jump in. We have two Leaps and some tracking hardware, we do VR games for Vives and Rifts, and jumping to AR with this looks like the right next move for us.
When you open source the design, how can someone get his hands on one of these reflectors? Most makers don’t have a special-diamond-tip-CNC-super-machine.
You’d have to commission the production of one.
Can you provide details about the size/dimensions of the ellipse in figure 1? The size of the ellipse and the approximate x/y/z location of the 100x100mm window would be enough information for makers like us to begin sourcing Plexiglas to make our own 🙂
Everything will be shared next week, including CAD files and related software 😉
It’s next week now… 🙂
Leap Motion: “Everything will be shared next week, including CAD files and related software ;)”
any updates?
Wow… a month later and not a word 😛
This is some great research and prototyping. Hats off to you guys! How can we help, contribute or otherwise be involved with North Star?
Looks wonderful, but have you thought about gender when you designed it? Have you tried it out on women?
Scroll to the top of the blog post and you will have your answer 😛
Well, yeah, and she’s great looking, too, but did you actually do work on whether females have more difficulty than males?
Not sure why you felt the need to comment on the appearance of our senior director of communications and events, but OK.
This headset is an extremely bleeding-edge prototype and has only been tried by a handful of people within Leap Motion’s HQ, including several women. There didn’t seem to be any issues specific to the gender divide, though of course testing across a range of demographics and facial/body types would be required to make a finished product that would be as universal as possible. This is, however, anything but a finished product.
Thanks, I’m sure there are others who will be interested.
Curious if this could be implemented as a module with a single reflector spanning across a desk? You wouldn’t need to wear a headset, just have the translucent portion over a keyboard/mouse where your hands already rest. Would a flat reflector ruin the experience?
You still want different images to go to your left and right eyes so that it looks 3D. This would be tricky, but maybe possible by using autostereoscopic technology and head/eye tracking.
Was something like a pico DLP ever considered or tried vs the LCD screens and if so, what were the drawbacks that dissuaded you from using it?
It should be possible to use DLPs, LCOS, or laser MEMS scanners to make superior systems in the spirit of North Star. However, the micro-optical fabrication process associated with a custom optical system of this nature was beyond the scope of this project.
The goal was to start simple, figure out what’s most important, and move directly into experimenting with the software experience.
Yeah, I get the prototyping simplicity of LCDs as demonstrated in your initial model with the larger screens and while taking on a super-custom solution like laser MEMs or the like is out-of-bounds I was just thinking of something like the off-the-shelf TI evaluation units (such as http://www.ti.com/tool/dlpdlcr3010evm ). Mostly I was trying to see if there was an inherent flaw with using such an offering vs LCD that was a concern during your efforts but it seems it was more a desire to avoid major tech deviations from the initial prototype, which certainly makes sense.
It’s hard to find an LCOS/DLP/MEMS equivalent of a 1600×1440, 120 fps display (in terms of resolution, frame rate and aspect ratio). It’s possible to take a 4K DLP or LCOS, but it would be pretty big and not available in a small module design. We thought about it for decreasing the form factor (at the expense of the specs), but we were concerned about needing additional expander or reflector optics (aside from the primary reflector optic), which would have increased the complexity and mechanical sensitivity.
Yeah, 1080P is going to be the limit for small and affordable, although it does at least offer 120FPS. I can also see the potential issue with packaging since you’d need a suitable distance between the projector and the primary reflector to achieve the display area desired without having to deal with reflected beam paths and such.
I appreciate your taking the time to talk through the decisions you made during prototyping, it’s always worthwhile to see how good teams think through problems… especially if they come to conclusions that differ from my own first-pass instincts.
So excited for this! Can’t wait to slap a vive tracker on one of these for room scale AR demos.
This is amazing and I’m so excited for the future of AR. Project North Star will absolutely change the direction of AR.
Laser MEMS modules are small enough to fit inside a cell phone, about the size of an eraser. Spectacular color.
Really awesome that you intend to share the hardware design open source! It was a happy surprise! Open standards, open source, open data, and open research are, I believe, key to realizing the potential of AR in a great way, and not a fragmented walled-garden way with competing platforms that are all just half good and not working well together. Open hardware could be one major enabling factor, while an open AR-Cloud could be another key ingredient. Feel free to join the new https://open-arcloud.org initiative to contribute to an open AR future!
Let me give you a suggestion: make a spatial computing system.
So when will we see the Android SDK, or will you not announce it at all?
Are you planning on selling the headset, so we can buy it? Or releasing instructions and possibly working with an OEM to sell the 180-degree sensor as a kit? Perhaps a 3D-print-and-put-it-together-yourself kind of thing. I’ll pay top dollar!
I have a question: are these kits created with Unity or another program? If they are created with Unity, why can’t you run this kit with just a camera and QR?
This is amazing!
Is it possible for a maker/hobbyist to get all of the needed parts?
I tried to build my own headset (VR) in the past, but some of the parts (like display panels) were very hard to find.
Very interesting, actually amazing. What I still don’t get is: what powers the headset? A PC? If this is the case, I’m not a huge fan of tethered AR headsets. And if it doesn’t do environmental tracking/detection/whatever, it is a less powerful tool than the competition. A nice tool for sure and I love the opensource philosophy, but I’m still not sure about what the market should be
There is no intended market for this headset, as it is first and foremost a prototype for AR UX design and exploration. But if a company wanted to take the design and productize it, the addition of sensors for environmental mapping would be trivial, as would adding a mobile processor.
Ok… got it. But… are you thinking of also developing the environmental scanning framework, or is this something that a 3rd party should do?
Any updates on the release of the source files? I’m ready to start browsing aliexpress for hardware and local machine shops to make the lenses!
If you find someone who can produce the Lens i’ll be interested
However the only problem left Will be the 180° leapmotion as they said its not the og leap
I wonder if you can use 2 regular Leapmotions? I think I found the displays to use here http://confuindustries.diytrade.com/sdp/2432682/4/pd-7469939/20494992-2852789/HDMI_to_MIPI_Adapter_Driver_for_BOE_3_5_inch_LCD_D.html
Found some shops in Denver that can do optical CNC, waiting to get Source files for estimates
Mind sharing the names of the shops you found in Denver? Also, love your enthusiasm on this thread!
I’ll post up shops once they release the source, don’t want to link to shops that can’t hit the right manufacturing tolerances.
No updates just yet. We’re taking some time to add improvements to the package that were implemented on the physical prototype.
Any idea on a time scale yet!? 🙂
Ditto on the time scale! Myself and some friends here in Seattle are eager to get some homebrew headsets up and running!
Any chance you could give us manufacturer sources in advance of your full release? I like to get numbers on volume pricing in advance to evaluate a kickstarter option.
Any News?
Still 10 hours left 🙂
🙁
Where did you find the release time?
Read the blog post you are commenting: “This is the design we are planning to open source next week.”
Blog post was published on April 9.
No updates just yet, as we’re taking some time to add improvements to the software package that were implemented on the physical prototype.
Thank you for the response. Any estimate? This week? 🙂
I just hope they don’t kill it before posting…
Still no news :(…..
Ok cool, thanks for keeping us posted. 🙂
Awesome, as for the 180° sensor, will it be possible to test with multiple current leap sensors?
Only using virtual machines; our software isn’t really designed for multiple sensors.
Cool, I found a couple projects that allow for multiple sensors on a single computer but lack documentation, would it be possible to use the current sensor until the 180 is released?
hello, Thomas, how are you?
Any news? I am dying to get those open source files.
just a new little demo on twitter https://twitter.com/LeapMotion/status/988749463215857664
Could a Kickstarter campaign be started to buy all the parts in sufficient volume and then provided as a self-assembly kit? This would get the price for individual purchase closer to scale.
That’s a great idea. If I can afford that parts, count me in!
I’d be interested in getting involved with a kickstarter, see my reply to David above.
Like David Gerding, I’d be interested in buying components, except for the frame, as I have a 3D printer at home! 😀
You make a good point. Not everyone needs everything and modularizing this to multiple related kickstarters might be to our benefit. What if we reach out each component manufacturer to set up separate but related kickstarters?
Especially for the display and optics. In those fields I would expect the manufacturer just needs a guaranteed volume… perhaps a fulfillment house for a single volume delivery.
Any chance you could give us the component manufacturer list in advance of full release?
Any idea when to expect the next update, Leap!?
Would someone from Leap be willing to confirm that the hardware specs will still be posted? I’m a fan of the idea (from below) of a Kickstarter project, presumably Leap-backed, towards a DIY kit of parts. If you, too, want to encourage Leap to commit to a “posting date” and support some kind of Kickstarter for parts, thumbs up or whatever Disqus uses here 🙂 And thanks, Leap, for cool tech. I think your focus on hands is the key to AR.
I’ve poked them a couple times through email, they are still planning on releasing, just waiting for documentation to get done (I’m not affiliated with Leap)
Wow, it definitely sounds like you know your short-run hardware fabrication. I teach software development in an undergraduate department that has game development, AR/VR, and CGI programs… so I’m super interested in the project from that perspective. I’m happy to help.
I have access to 3D printers, laser cutters etc which might help with some of the kickstarter bits.
I’ll follow the thread here and try to keep up.
I’m getting into it personally, but have been following how-tos and reading up on it over the last 10 years on sites like Hackaday, plus I know some people at OSHPark and Autodesk that do it day in/day out. I did a deep dive on getting a setup to do a small run for a project at home, finding the most reliable DIY setup that’d use the same hardware for something like this. I used to run a print cluster and taught a 3D printing class at a vocational school for a while. I also went to school for Game/VR/AR production.
Have you considered adding a liquid crystal matrix into the transparent visor so you can block incoming light and replace it with what’s being reflected? Having to add the reflection limits the applications here.
Any news?
https://uploads.disquscdn.com/images/c35645cef95ad1d0916168cdc2d4577fc1b6d73c49599121d0127b368c195bb6.jpg