Webinar: Ethical Considerations for System Design, Part 2

Good afternoon, everybody. We're very excited to be presenting three of IEEE's standards projects in the P7000 series. We have John Havens on the line; he is the Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and John is going to provide a brief overview of the initiative and how it inspired some of these standards.

Awesome. Thank you, Justin and David, and thanks to everyone; I'm really honored to be here. Per what Justin said, back in 2015 the IEEE Standards Association created the IEEE Global Initiative — as Justin just mentioned, that's the shorthand, "the IEEE Global Initiative" — and there were two primary goals for the initiative. One was to create a paper called Ethically Aligned Design, and we've actually posted two versions of that paper; they're available for free to the public as Creative Commons documents. Version two, for instance, features thirteen committees — about 250 global experts worked to create it — and they identified key issues that are top of mind to experts in the space right now, in areas like law, personal data, and autonomous weapons. The goal of that paper was to get fifteen or so experts in each of these different groups to list the top five or ten or however many issues per section, so that other people working in the space could have some best-in-class thinking. They were also released as request-for-input documents. We wanted to make two points: first, the document is in draft form because it's so important that we wanted to get it up and out to the general public; and second, we wanted feedback. We were very clearly stating that these are not the only recommendations — these are what we call candidate recommendations — but we did want to give pragmatic, best advice on how to actually build autonomous and intelligent systems while prioritizing ethical considerations at the front end of design. So what happened was, another key
focus of the creation of the group — and the credit for this really goes to Konstantinos Karachalios, who is the IEEE Standards Association Managing Director, both for the idea for the paper and for the standards working groups we're talking about today. The logic of Ethically Aligned Design — or, more accurately, of the members of the initiative, the volunteers in the working groups and committees creating the paper — was that oftentimes an individual or a committee would say, "It looks like we really need a standard focused on X," the logic being that there was enough recognition that a certain technology was ready to justify creating a standards working group. In that case, for a number of the P7000 standards, members of the initiative would then actually go to the IEEE Standards Association, create what's called a PAR, and apply to have that become a working group. So the initiative is separate from the Standards Association — it's a program of the Standards Association — but just to be crystal clear, the initiative doesn't make standards; that's up to the SA. Also, once a standards project is approved, like the projects we're talking about today, it's not yet a standard; it takes about two or three years for it to be created. I also want to be clear that not all of the P7000 standards were directly inspired by Ethically Aligned Design, but when they are in the P7000 series, the logic is that they are focused both on technological interoperability and on key ethical considerations in regard to those technologies. So it's not just, for instance, saying "algorithmic bias" or "facial recognition" in the abstract; rather, it's "here's how the technology for facial recognition works" — and you'll hear more about that project later, as with all the ones today. The logic is: what are the key people and considerations, for things like facial recognition, that
we have to understand. So sociologists, philosophers — people like that — along with, critically, the engineers and the programmers, are in these groups working together to make this stuff work. Anyway, Justin, thank you so much; I hope that's helpful, and I'm really excited for today. Thank you again.

Thank you so much for providing that overview and insight on the initiative and the standards. With that, I'd like to turn it over to Josh, who is going to speak about P7011.

All right. Yes, as it said, my name is Josh Island — you can go to our first slide — and I'm chair for P7011, whose long name is the Standard for the Process of Identifying and Rating the Trustworthiness of News Sources. Just a little bit of background about myself; the next slide has my contact information. I have been working in IT for about fifteen years, and I just recently completed a master's in public policy. As part of the capstone for that master's, I was putting together a policy paper, and it was actually the first issue of Ethically Aligned Design that inspired some of my thought process on this. So I put together a paper discussing how to deal with some of the issues we were seeing in misleading news and information being presented to the public, from the most recent presidential election cycle and thereafter. The paper was published in 2017 and presented at a conference at the end of 2017. The result of this was the formation of our working group for the IEEE. We are relatively new — we've only been together for just a few months — but in that time I think we've really made some significant strides. If you go to the next slide, here's a high-level view of what our working group is looking to do. The idea is to create a tool for consumers of news and information, to help them have a better understanding of the reputation and validity of the information they are consuming through various news outlets, and you might
imagine this as being Twitter, Facebook, Google, Bing — it really is these portals that users go through to consume news and information — and providing a tool that can be leveraged within those frameworks to give some semblance of an understanding of the reputation of where the information is coming from. This really stems from a common refrain that encourages people to do more research before they share, post, click "like," or comment on something that they see: to look into the source of who is making that post, to do a fact-check on that piece of information. While those are all great habits to have, the reality of the situation is that it's not a practical solution for the general public. To expect that kind of effort to be put into every aspect of something that is core to most people's daily lives is considerably onerous, and many people may not even have the wherewithal or skills to perform that kind of research. So the idea behind this is to create an open standard through the IEEE by which a reputation can be established for news and information outlets and authors, which can then be presented to consumers of information to help them with the process of understanding where the information is coming from. This is especially important in this day and age of increased democratization of news and information. If you want to go to the next slide — just at a high level again, here is what we've been discussing about the evaluation criteria and how this standard would approach it. It's looking at factual accuracy where that's possible — this is probably one of the more difficult things to do, but it is certainly a criterion we will want to use. The more automatable aspects would be things like biased language — detecting biased language as part of the standard — and the use of misleading
headlines, by comparing the context of headlines against the actual content of articles; the existence and utilization of correction and retraction policies, which may not lend itself as much to automation — it could be a self-reporting feature or even a human-driven piece, and it would of course apply more to larger news organizations like Fox News, the New York Times, or CNN; the clear distinction between advertisements and content — sometimes called "native content" — when it's not even clear to a consumer of information that what they're reading is just a paid advertisement versus actual news reported by some sort of journalistic institution or journalist; and of course the historical record, going backwards in time to see what has been put together by the person or organization. Next slide, please. One of the biggest things we need within our working group: we have a very diverse set of skills in the working group so far, and I'm very happy with the makeup we have. We have myself, coming from a public policy and IT standpoint; we have computer programmers, people who work in AI and automation, journalists, and news organizations. We have limited involvement from industry, and industry is really going to be a vital part of this working group; the success of this standard depends on getting industry feedback and input on how to develop it in a way that makes it usable. What we certainly don't want — for this or really any standard — is for it to become a purely academic exercise, and that's why industry involvement is going to be absolutely critical. The idea is for the standard to be continually evolving: it's not a standard we release and forget about, it's a standard that will be released and updated regularly as information technology changes and as the nature of networking changes — not physical hardware networking, but the interconnection of sites and information on the Internet.
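The headline-versus-article comparison mentioned above could, in its simplest form, look something like the following sketch. This is a toy illustration, not anything the working group has specified; the function names and the similarity threshold are invented for this example.

```python
# Toy sketch of comparing a headline's vocabulary against the article body.
# All names and thresholds here are illustrative, not part of P7011.
from collections import Counter
import math
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return re.findall(r"[a-z']+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two bags of words (0.0 to 1.0)."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())
    norm = (math.sqrt(sum(v * v for v in ca.values())) *
            math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def headline_supported_by_body(headline, body, threshold=0.1):
    """Flag a headline whose vocabulary barely overlaps the article body."""
    return cosine_similarity(tokens(headline), tokens(body)) >= threshold

headline = "City council approves new budget for road repairs"
body = ("The city council voted on Tuesday to approve a budget that sets aside "
        "funds for road repairs across several districts.")
print(headline_supported_by_body(headline, body))  # prints True for this pair
```

A real system would need far more than word overlap — paraphrase detection, claim extraction, and so on — but the sketch shows why this criterion is among the "more automatable" ones mentioned.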
Changes there will require continuous updates of the standard, and the success of that will really be driven by industry feedback, adoption, and input. Next slide, please. As for where we are currently: we have six subgroups put together, dealing with what we have initially identified as the big factors of the standard, which helps us divide up what we're looking to do. Scope is the one we are most actively working on currently, because once our scope is completely defined — or as close to fully defined as we can achieve at this time — it's really going to help us understand the rest of the aspects. Within scope currently are all purveyors of information — like I said, the New York Times, Fox News, CNN, any of those kinds of online-presence groups — and we are currently discussing how far to extend that. Does it go down to an individual blog? Are we more concerned with just the author versus the purveyor? Those are the kinds of questions we're trying to decide, which will really frame how we approach the standard. The rest of the subgroups: data provenance — how we're dealing with the data we consume and what our approaches are; trust and identity — this is something we have actually made a lot of internal progress on, with some, I think, novel ideas about how to approach it, such as using blockchain and using a fingerprint of an author's writing, very similar to the anti-plagiarism measures in software you may be familiar with, which can really help with creating a fingerprint for an author even if he or she changes names between institutions — you're looking to create an identity and a reputation for an individual. Then there are our criteria for text analysis and publisher analysis — what exactly we would use to define those reputation ratings — and, of course, related systems.
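As a rough illustration of the author-fingerprinting idea just mentioned — character n-gram profiles of the kind anti-plagiarism and stylometry tools use — here is a minimal Python sketch. Everything in it (function names, the n-gram size, the sample texts) is an assumption for illustration, not part of any draft of the standard.

```python
# Toy stylometric fingerprint: character n-gram frequency profiles.
# Illustrative only; P7011 specifies no such algorithm.
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Frequency profile of character n-grams for a piece of writing."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def profile_similarity(p, q):
    """Cosine similarity between two n-gram profiles (0.0 to 1.0)."""
    dot = sum(p[g] * q[g] for g in p.keys() & q.keys())
    norm = (math.sqrt(sum(v * v for v in p.values())) *
            math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

sample_a = "The committee's findings, however, were never made public."
sample_b = "The committee's report, however, was never made available."
sample_c = "LOL u wont believe what happened next!!!"

pa, pb, pc = (ngram_profile(s) for s in (sample_a, sample_b, sample_c))
# Texts in a similar prose style score higher than dissimilar ones.
print(profile_similarity(pa, pb) > profile_similarity(pa, pc))  # prints True
```

The appeal of this family of techniques for the trust/identity subgroup is that the profile travels with the writing style, not the byline, so an author who changes names between institutions can still be linked to one reputation.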
We're not an island in this; there's been a lot of interest from other groups, and there are other tangential solutions out there that do similar things — like I mentioned, anti-plagiarism software is a very interesting way to approach some of this. So we're really identifying what these adjacent systems are and how we can work with those groups and entities to leverage a more comprehensive and efficient standard. Our next meeting is probably going to be the second week of August, and of course we encourage any participation you may be willing to provide — show up and listen to what's going on, or feel free to provide us with ample feedback. We're really looking to make this as broad and as all-encompassing as we possibly can, because that's what's going to be required to make this successful. Thank you.

Thank you very much for the presentation — it was very enlightening, for sure — and I cannot echo more that anybody who wishes to join this group or the other groups should get involved; we'll be posting the contact information after the webinar is over so that you all can follow up with Josh. So with that, I'm going to turn it over to David to speak about P7012.

Hi — sorry, I was muted. Let's see; everybody can hear me, I assume. Our working group has not perhaps gotten as far along as the previous one, but we're very excited about it, and it has some history prior to starting up at the IEEE — and it seems like the IEEE may be a very good home for this activity. I'll explain the basic idea in a second, but late last year, when proposals were being entertained, one of the people who helped start our group was Doc Searls, who, along with Joyce Searls, has been pursuing a project called Project VRM. I don't think the initials are supposed to mean anything, but they're supposed to suggest a contrast with a technology called CRM — customer relationship management — among those
of us who are in the software industry. That kind of idea — customer relationship management — is centered on the seller of a service, and a lot of people working in the privacy field realized that a lot of the issues of privacy center on the individuals whose information is being shared, not so much on the vendors of services. So there was a lot of interest in reversing that, turning the tables, creating a more peer-wise structure. As part of that initiative, which has been going on for several years — and lots of other initiatives have been going on around it — it became clear that there are some very simple standards, compared to the large privacy problem, that might help a lot with that effort, and I'll talk about that in this brief introduction. If you go to the first slide: basically, the purpose of a standard in this area is, in the context of negotiating an agreement between an individual and another individual, a company, or any entity that is going to be collecting and managing information about that individual, to enable the terms of the agreement — whatever policy or process the service offers, and so forth — to be negotiated fairly from both sides, so that there really is a meeting of minds about what uses of the information should be allowed. We've all heard in the news that various providers say, "It's wonderful — everybody who signs up for our service really wants all the advertising," and that sounds plausible perhaps, but we don't have any evidence that the individuals receiving the advertisements actually wanted their personal information shared just so that the advertising could be targeted. So the idea of a standard is basically to enable proffering terms — what you as an individual want done with your information, and what you don't want done
with your information — and to allow those to be processed by machine agents automatically, because obviously this is not going to scale to network scale without having a large percentage of that activity of exchanging and negotiating terms automated. As I said, the purpose is basically to balance the scales between one side and the other; often one side could be thought of as a customer of the company, but it need not always be quite that model. The basic idea is that a lot of agreements are two-party agreements: the first party in the agreement and the second party have to come to terms, very much like a contract in common law — it's negotiated. In particular, there's been a lot of work in both policy communities and in various kinds of technology, standards, and norms about what are called privacy policies — that is, the documents posted on websites and so forth. We're not really focusing on addressing privacy policies at all; policies are unilateral, take-it-or-leave-it terms, whereas these agreements are bilateral. Take it to the next slide. So why do we need a standard? I've hinted at that a little bit. The fundamental reason is the internet — and, in general, the scale of networking we have today, because this can potentially go beyond the internet to almost any kind of networked situation. The internet doesn't come with a notion of privacy defined in it; in fact, the internet's scope doesn't even describe the idea of collections of information — databases. We've had databases since the 1970s, and there are privacy standards associated with databases — medical databases, insurance databases, financial databases — that have been around for a long time. But with networking we have a new opportunity, and that new opportunity is that, unlike past business relationships, the Internet has encouraged a peer-to-peer style of architecture,
and so we have peer-to-peer protocols, both at low levels, like TCP/IP, and at higher levels, like the World Wide Web family of protocols — HTTP and HTML, the REST API conventions, and so forth — and also email standards. Those are really positioned to support two-party agreements, and actually to support some kind of handshake that might allow an agreement to be reached fairly simply: I want these terms, you want those terms, we agree on the subset, and we're done. The norm up to today is that the operators of services proffer policies, as I mentioned before, that are just a list of terms, and they're typically specified in lawyerly language — which, to a technology person and a standards person, is pretty darn vague — and defined in terms of an external context that has nothing to do with information processing systems. Those policies also authorize future changes to be made to the terms and say how they might be made — whether notice is provided, et cetera. At the end of the day, the individuals who want to use the service have one choice: they can either accept the policies or not use the service. In fact, they're not just accepting the policy; they're accepting any future changes to the policy as well, to some extent. So this has been standardized, and it's a well-established body of law, but we can do better in a peer-to-peer world, and we can do better technologically. The reason we need a standard is that, once you take away the communication problem — everyone's reachable on the internet, every interaction is bidirectional — we still don't have any standards for what an agreement or its terms might look like. The lack of a standard is the last step to getting to a new way of doing business, and given that, we can implement fairly intelligent notions of agency.
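To make the handshake just described concrete — "I want these terms, you want those terms, we agree on the subset" — here is a minimal Python sketch. The term vocabulary and the dictionary format are invented for illustration; P7012 has not defined any such format.

```python
# Toy sketch of a two-party terms handshake: the individual states preferred
# terms, the service requests its terms, and agents compute the agreed subset.
# All term names and the format are hypothetical.

# Terms the individual is willing to accept (True = allowed).
my_preferences = {
    "store_contact_info": True,
    "targeted_advertising": False,
    "share_with_third_parties": False,
    "retain_after_account_deletion": False,
}

# Terms the service requests (True = the service wants this).
service_terms = {
    "store_contact_info": True,
    "targeted_advertising": True,
    "share_with_third_parties": False,
}

def negotiate(preferences, requested):
    """Split the requested terms into an agreed set and a disputed set."""
    agreed = {t for t, wanted in requested.items() if wanted and preferences.get(t)}
    disputed = {t for t, wanted in requested.items() if wanted and not preferences.get(t)}
    return agreed, disputed

agreed, disputed = negotiate(my_preferences, service_terms)
print(sorted(agreed))    # terms both sides accept
print(sorted(disputed))  # terms my agent should flag for me to think hard about
```

The point of the sketch is only that, once terms are machine-readable, this matching step is trivial to automate — which is exactly what makes it feasible at internet scale.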
I can have my little wallet, if you will — some software that runs on my cell phone — that can read a machine-readable set of terms proffered by a vendor and tell me which ones I might want to think hard about and which ones I've already decided I don't care much about. Those agents can simplify the process, but again, we need some kind of common standard, and if we want to implement this at scale — and the internet now has huge scale — we need standardized, machine-readable terms as part of that. So that's what this project is about. We can draw on a lot of history, because first of all there's a lot of legal history about what terms might make sense; there's also a lot of technical history about negotiating various kinds of terms; and there are even projects that have either been tried and not worked out so well, or are actively being worked on but probably won't have the leverage that an IEEE standard would have. Go to the next slide. The crucial thing about a standard is whether it gets adopted. You can have a standard that no one adopts — that's a particular problem, because it just sits on the shelf — and similarly, you can have a very complex standard of which only a very small subset gets adopted, as we see with many technical standards. To really succeed in creating a standard that gets used, you need to focus on the stakeholders, and we really think there are two major stakeholders — the first parties and the second parties, which are individuals and various kinds of service providers — plus the technologists who provide the context. The challenge for a standards group in addressing individuals is part of what makes our group somewhat unusual: we are trying to bring in a variety of entities that are not typically members of the IEEE. Hopefully some of them will join the IEEE and participate in some of our working groups; some of them
are people who are in the IEEE but have outside interests in privacy. Rather than having all the individuals of the world participate, we hope those people will represent the interests of individuals fairly and accurately so this can become a useful standard. The secondary stakeholders are companies, and industry really should care about this. One reason they're starting to care is the GDPR, which enforces a bunch of principles and rules — not machine-readable, and not always clear — on entities that hold a lot of personal data. That's caused a lot of angst, and people are already seeing the impact of it, since it went into effect a couple of months ago. But the impact so far has been at the policy level: you get a pop-up that says you agree to the policy, you agree to the fact that we use cookies on our site and our policy is X — and that's about the extent of the automation. Behind the scenes, and few of us are aware of this, satisfying the GDPR requirements involves a lot more than that. For example, in Europe you are now able to go to a company and say, "Give me every piece of information you have about me," and you can go to that company even if you haven't agreed to its personal information policies. That means companies have to index and maintain all their data about an individual much more carefully than they used to, and perhaps in a more unified way, and there's a lot of change going on in that space as well. The next step after that is figuring out, when they introduce a new service, whether they have to get additional agreements from the people captured in that data, and so forth. So there's a real industry stake here, and there's a variety of ways industry can deal with it. My personal view, coming from industry, is that at the end of the day something like the GDPR is going to become the norm, if not the law, in most places. Companies are going to be struggling with
whether they can be required to do that, but they're also going to be struggling with the question of how to do it, whether they're forced to or their customers really want it. The third part of the stakeholders is what I would call the context people — the companies that provide the technology that supports all this. Lots of companies hold information, but the information is held in websites, by contractors, in clouds — in systems with features that provide ways to hold personal information and dispense it. Those context technology providers — whether they're cloud entities or browser vendors, the folks who make browsers — are going to have to provide some kind of hooks for standards like this. That's one part; the second part is the legal industry, if you will, and the policy world, which has to deal with the norms and laws around such systems: discussing whether particular laws should apply, understanding the relationship of the laws to the technology and the standards in the system, and so forth. That's basically who we want involved in this process, because to get a standard adopted you really need all affected parties to buy in and be willing to adopt it and use it day to day. For example, there was a closely related idea many years ago — probably an idea way too early — called P3P, which was a web standard for stating privacy requirements, if you will, in the HTTP and HTML protocols. It was a well-elaborated protocol, but as far as I know it was only implemented by Microsoft's browser, not by the other browsers, and it wasn't widely adopted — not because people didn't think it was a good idea, but because no one was paying attention to the serious question of adoption and
getting all the stakeholders in the same room to discuss things. Now these things matter a lot more — it's a much larger, much more serious part of our business world — and hopefully some of the flaws in P3P, which are potentially technical flaws, can be remedied, because there are a lot more eyeballs on the problem. Go to the next slide —

David, this is Justin — we are going to have to move on to Eric, so if you could please wrap up.

That's a good stopping point. You can look at this slide and get a sense of the desirable properties, and I'm happy to answer questions afterwards. Thanks.

Thank you for the presentation. I know I myself had a whole bunch of questions and discussions I would have liked to have had about it, but in the interest of time I'm going to be moving on to Eric in a moment. I did also want to remind all the participants: please feel free to chat questions to David, and we will do our absolute best to get them to him. So with that, I'm going to move on to P7013 with Eric.

Hi, everyone. This is Erik Learned-Miller; I'm the vice chair of the working group for the Inclusion and Application Standards for Automated Facial Analysis Technology. Go ahead to the first slide. I just want to talk about the chair of the working group, who started this process: Joy Buolamwini. She's a graduate student at MIT right now and is leading the charge to bring attention to issues in the accuracy, fairness, and appropriateness of the use of face recognition technology — by governments, by companies, and by individuals. She started this process and recruited me, and I'm happy to be a part of it. One of her major motivations was that, as she was doing some work in graduate school using various off-the-shelf face detectors, she found that they frequently wouldn't see her face at all, basically due to the darkness of her skin; and in the
applications she was working on, this was perhaps more of a nuisance; but when you start deploying face recognition in other areas, like law enforcement and various financial filtering applications, these gross differences in performance across subgroups can really create serious problems and have serious negative effects. Of course, there are also lots of positive effects, so part of our standard is about laying out the risks and benefits of these new technologies as they're deployed. Next slide. I'll tell you a little bit about myself. I believe one of the earlier slides said I was a professor at MIT — if they have a job for me, I'll definitely consider it — but right now I'm at UMass Amherst, which is also a great computer science school. My research interests are in computer vision, machine learning, and statistics. I've been working in face recognition since about 2005, in areas like detection, verification, recognition, and alignment. I've also done quite a bit of work on publishing datasets and databases for face recognition, such as Labeled Faces in the Wild, which is probably the most widely used database for testing face recognition algorithms right now; we also published at UMass a face detection database and benchmark. Issues of how you build your databases and benchmarks are of course important when you're trying to develop standards for evaluating face recognition technology. Next slide. Our goal really is to produce a playbook for decision makers who are considering adopting facial analysis technology. Some examples — by no means exclusive — would be: face recognition for policing; expression recognition for hiring; face verification for access to mobile devices, like you see on the iPhone X, or to financial services, acting as a password to your bank account; face detection and tracking for consumer apps;
and facial attribute classification for insurance quotes. All of these applications are either already deployed or under consideration for deployment now, and of course there are many others as well. Next slide. As everyone knows, face recognition and face analysis technology is being more and more widely deployed across all kinds of different sectors, and there are growing concerns about potentially unlawful or unregulated use of face detection. For example, there's an organization called Big Brother Watch in the UK that reports on various deployments by police in the UK in which large numbers of people are under surveillance with automatic face recognition algorithms, and people — even if they're mis-recognized — may be put into face databases that were originally created for felons or criminals. So there are many issues there. There's another report, by the Georgetown Law School, called The Perpetual Line-Up, that discusses similar issues. And researchers like the chair of our working group, Joy, and her co-author have done studies showing that the accuracy of standard off-the-shelf technology is often significantly worse for some subgroups than others. For example, in identifying the gender of a subject, they found that accuracy rates were much lower for people with dark skin than for people with light skin, and recognition rates have likewise been shown to be lower for various subgroups, possibly based on how the algorithms were trained or other aspects. Those are important issues that are coming up more and more, and we want to try to address them: to make recommendations to people who are trying to evaluate or run this software, and also to create open standards so that people whose faces are being recognized can evaluate the technology and understand what to expect from it. Next slide.
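The subgroup disparities described in those studies are easy to miss if you only look at aggregate numbers. Here is a minimal Python sketch, with entirely made-up figures, of why accuracy should be reported per subgroup rather than only overall:

```python
# Toy illustration: overall accuracy can look acceptable while masking a
# large gap between subgroups. The numbers below are invented.
from collections import defaultdict

# (subgroup, prediction_correct) pairs from a hypothetical test set.
results = (
    [("lighter-skinned", True)] * 95 + [("lighter-skinned", False)] * 5 +
    [("darker-skinned", True)] * 70 + [("darker-skinned", False)] * 30
)

def accuracy_by_subgroup(results):
    """Accuracy computed separately for each subgroup."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, ok in results:
        total[group] += 1
        correct[group] += ok
    return {g: correct[g] / total[g] for g in total}

per_group = accuracy_by_subgroup(results)
overall = sum(ok for _, ok in results) / len(results)
print(overall)    # 0.825 -- looks acceptable in aggregate
print(per_group)  # but hides a 25-point gap between the two subgroups
```

Disaggregated reporting of exactly this kind is one of the evaluation practices a standard in this space could reasonably require of vendors.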
130 million US adults are indexed in various law enforcement databases, and there's really very little regulation currently about how these faces are used, whose can be stored, and what kind of accuracy the algorithms being run against these databases have. Next slide. This is an example from the Big Brother Watch webpage, in which they did a study on a deployment by the South Wales police force in the UK. They determined that 91 percent of the matches reported by the system were incorrect and wrongly identified innocent people as matching with criminals in a database, and subsequently more than 2,400 innocent people had their photos taken and stored in the database for later matching. So we're not necessarily trying to define the rules at the moment, but we do want to promote transparency and establish standards for the way these kinds of technologies are being used. Next slide. We want to choose three particular scenarios for study in our standard. It's very early in our work on the standard, so all of this is subject to change, but we're looking at three possible scenarios for now: one in law enforcement, one in consumer products, and one in business operations, such as whether somebody's face can be automatically analyzed to determine whether they should receive a loan or not. We really want to provide clear discussions of the risks and benefits of using the technology, both in ways that have been well established and in other respects that are perhaps more speculative. Next slide. For those who haven't been reading or thinking about this a lot, this is just a collection of some of the things that could happen. Believe it or not, there are people out there publishing scientific papers that say they can tell your IQ just by looking at your face. These are somewhat controversial papers, but you can imagine the consequences.
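The 91 percent figure above is largely a base-rate effect: when a system scans a large crowd for a small watchlist, even a quite accurate recognizer produces mostly false alerts. A minimal sketch of the arithmetic, using illustrative numbers rather than the actual South Wales figures:

```python
def expected_match_precision(crowd_size, watchlist_hits, tpr, fpr):
    """Expected fraction of alerts that are true matches.

    crowd_size: people scanned; watchlist_hits: how many truly are on the list;
    tpr: true positive rate; fpr: false positive rate of the recognizer.
    """
    true_alerts = watchlist_hits * tpr
    false_alerts = (crowd_size - watchlist_hits) * fpr
    return true_alerts / (true_alerts + false_alerts)

# 100,000 people scanned, 10 actually on the watchlist, a 95% true positive
# rate, and a seemingly tiny 0.1% false positive rate:
precision = expected_match_precision(100_000, 10, 0.95, 0.001)
print(round(precision, 3))  # 0.087, i.e. roughly 91% of alerts are wrong
```

So a false-match rate like the one reported does not even require a bad algorithm; it falls out of scanning a large population for a tiny watchlist.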
If techniques like these were used, they could be used by an insurance company to decide what your premium should be, and things like this. So I think it's obvious that there are serious potential downsides of this technology being misused, and we want to identify these possible individual harms or collective social harms and establish guidelines for how this technology should be used. Next slide. Just a summary of where we are right now: once again, Joy Buolamwini is our chair and the leader of this effort, and I've joined her recently. She was sorry she couldn't be here today; she's on a transatlantic flight as we speak. But we're looking for additional participation. We've got a pending call for participation, where we really want to get strong industry participation of course, and we've got verbal commitments from many tech companies, as you can see here, and also people from the Georgetown Law Center on Privacy and Technology. So that's what I have today, and thanks for inviting me to participate. Thank you, Erik, I really appreciate your presentation, and I did have one quick follow-up question for you. Given that we have not yet approached 50 percent of the world's population connected to the Internet, and that typically the data used to fuel these technologies comes from, shall we say, industrialized countries for the most part, how is that going to impact and affect those who have not yet come online, who may be in different areas of the world and have different appearances? Yeah, that's a great question. First of all, I didn't put in any details of the more specific issues we're considering for the standard, but some of the things that are very important to us are things like the intended use of a particular face recognition technology. So for example, suppose I've trained my face recognizer on a database of Caucasian faces.
If I then deploy it in Beijing for access control in an office building, it may not work very well. So one of the things we want to do, very much like the pharmaceutical industry, is establish a notion of intended uses, and of indications and contraindications, for a given software product. If your product has been tested on certain populations and shown to achieve a certain level of accuracy on those populations, then it would be, I don't want to say approved for use, but satisfactory for use within those populations. But if you took it to a completely different population for which it had not been evaluated, it's really not ready for deployment there, or at least if you deploy it there you can't really claim to be able to predict its performance in those situations. So I'm a big fan of trying to define intended use, and the specific things that something is not for. Another example: the database I developed, Labeled Faces in the Wild, has virtually no children in it, and the reasons for that are sort of a long story, but obviously if you train a face recognizer on our database you would not necessarily expect it to do terribly well at recognizing children. People understand these things in the technological world, but that doesn't mean that there are standards for describing the intended use of your software and so forth. So if you go to a place that hasn't had such technology deployed before, you really need to look at the population of interest. Are people wearing something on their heads, whether it's turbans or head scarves, or something else that's going to completely change the way an algorithm works in a particular scenario? Our emphasis is really going to be on openness and clarity, rather than necessarily trying to define rigid rules up front. I'll pause there; I think that's a part of the answer to your question. Thank you. Thank you, I appreciate it.
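The pharmaceutical-style idea of intended uses, indications, and contraindications could eventually take the form of a machine-readable label shipped with a recognizer. A hypothetical sketch, with the field names and product name invented for illustration; nothing here is part of a published standard:

```python
from dataclasses import dataclass

@dataclass
class IntendedUseLabel:
    """Hypothetical intended-use label for a face analysis product."""
    product: str
    intended_uses: list          # deployments the vendor designed for
    evaluated_populations: dict  # population description -> measured accuracy
    contraindications: list      # settings the product was NOT evaluated for

    def supports(self, population):
        """True only if accuracy was actually measured on this population."""
        return population in self.evaluated_populations

label = IntendedUseLabel(
    product="ExampleFaceID",  # invented name
    intended_uses=["office access control"],
    evaluated_populations={"adults, North American office workers": 0.97},
    contraindications=["children", "any population not evaluated above"],
)
print(label.supports("adults, North American office workers"))  # True
print(label.supports("children"))                               # False
```

A deployer could then refuse, or at least flag, any use of the product on a population for which `supports` returns False, which is the "can't claim to predict its performance" point made above.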
In the interest of time, I'm going to have to move on, and I do have a question for David. One of the items you were discussing is that GDPR-type legislation is going to become pervasive, and in that respect we have had some laws passed or under consideration here in the United States, such as the California Consumer Privacy Act. I was wondering how you think those types of laws affect your work in this space. Actually, this is a good opportunity, if he's on the call and can talk, for Doc Searls to answer the question; Doc is also in our group. Yeah, can you all hear me? Yes. Okay, good. I'm sitting at a Starbucks in Eugene, Oregon, and there's traffic going by, so forgive the noise. The main effect of the GDPR on privacy in Europe, and now of the pending legislation and regulation in California, is really to awaken those operating the servers of the world that David described well earlier to the new possibilities we're proposing and working on here, which I think can really provide GDPR compliance, and compliance with a lot of these other regulations. Almost always in the development of these regulations the assumption is that all the agency lives on the server side, but fortunately, in the case of the GDPR, it defines the data controller in a way that can be a natural person as well as a business, and machines can act for either party. We've been working on this for a long time: colleagues like Joyce and I at ProjectVRM, which David mentioned earlier and which has been at the Berkman Klein Center at Harvard since 2006, began working with developers on tools that make individuals online both independent and better able to engage. The ad blockers and tracking protection and some other tools are examples of that, and we worked with browser makers, especially Mozilla, on approaches to this. But it really hasn't been until the threat of the GDPR.
The GDPR came into effect on May 25th of this year, and while it hasn't really been enforced much yet, it hasn't been until then that things have really upset the applecart for the companies operating servers out there. Their first response has been to produce really annoying cookie notices that don't even make distinctions between cookies that are friendly and first-party, set for the site's own purposes, which was the original purpose of cookies as they were designed by Lou Montulli at Netscape in 1994, where all they do is store a bit of state on the visitor's side, and the third-party cookies that are busy spying on you. These are very, very different things; it's sort of the difference between keys in your pocket and ordnance you can step on that blows up. It's really a vast difference, and that difference is blurred in the way these cookie notices are deployed. So I think that we're really very much advantaged right now by the regulation coming along, but I think it's worth mentioning as well that we have had legislation and regulation in the absence of tech on our side. That was really an oversight mistake on the technology developers' part, but we can remedy that now, and it really will help to have standards that, for example, the browser makers can adopt. Does that answer the question? Yes, thank you very much, I appreciate it. I know we're running very close on time, so I'm going to read a question we received from the audience, from Bill Radcliffe. The question is: are you considering the relationships or impact of one working group on another, for example P7011 and P7013? I'm not sure which one of you would like to take that one, and we do have John Havens on the line as well; maybe he may want to take a stab at that question.
Well, whenever we've discovered that there's overlap between our projects, we do try to work together, and we do have coordinators within the 7000 series to help us make sure we're talking to each other. I think we just found that there's not enough overlap between 7011 and 7013 that we'd consider combining them, but of course resources can be shared between them. And again, the 7000 series does have oversight as far as management to make sure that if there is opportunity for us to work with each other, we're identifying that and leveraging it the best we can. Thank you. With that, we are at the top of the hour, so what I'd like to do is thank all of the panelists for speaking; your information was very insightful, and it's fascinating to learn more about these projects. I'd also like to thank the audience for participating; the questions are appreciated.
