16 votes

Topic deleted by author

4 comments

  1. [3]
    kfwyre

    This is a good article, and I welcome any skepticism of ed tech. From within the field, ed tech is seen as landmark and transformative, and we teachers are treated to near-constant laudatory praise of its efficacy -- the type of positive feedback that would never be given to us, the ineffective curmudgeons ruining children's lives on the regular.

    I've been pointing out to my colleagues for some time now that education is the only institution involved with or concerned about child development which is advocating for more screen time. Outside of education, everyone else is focused on how we can get kids to have less screen time and interact more with the real world. The priorities of our educational system are seemingly at odds with the rest of the world.

    I get why this is happening, and as a teacher I can even say that I welcome a lot of the ed tech simply because it makes my job easier, but only because so much of my job has shifted from instruction to data gathering and tracking. Computer programs and content platforms excel at this, while it's nearly impossible to do it well by hand. As such, ed tech scratches an itch. It's the wrong itch to scratch, mind you, but it's the one we're told needs the attention.

    Correspondingly, this is also why administrators and districts love ed tech. It promises them a magical wonderland of data, which is the current god of education. The problem with data is that you can assess and track until the cows come home, but it doesn't actually impart any learning. That happens through teaching, and most ed tech platforms I've seen simply aren't very good at that part.

    14 votes
    1. [3]
      Comment deleted by author
      1. [2]
        kfwyre
        • Exemplary

        I still do some "standard" direct instruction, but tech-based instruction is very highly valued at the moment.

        If I were to get a walkthrough observation where I was standing at the front of the room and talking to the class while writing on my whiteboard, I would likely be criticized for my "chalk and talk" style of teaching. The notes would say that I am not engaging all of my students or meeting the expectations of a modern classroom.

        On the other hand, if I were to film myself giving the exact same lesson and have students watch a video of it on Chromebooks, it would be seen as transformative and landmark. I would be praised for my use of technology and for engaging all students, even though the mechanism for instruction is effectively identical.

        Granted, there are some definite benefits to the video that are absent from instruction in person. Students can pause and rewind should they miss something or get confused. Headphones help isolate them from sounds and distractions from the rest of the room and allow them to focus more on the instruction.

        What I think is absent from current educational priority is any consideration for the benefits that in-person teaching has. A video does not have the ability to read the room and rephrase something that came across as confusing. It can't build a two-way relationship with students, identifying and responding to their needs. It can't facilitate discussions between kids grappling with new concepts and big ideas. A lot of teaching is effectively being a role model and a mentor via academic content, and that aspect is lost when instruction moves online.

        Khan Academy is a wonderful resource, but it is basically just video of a "chalk and talk" teacher with some practice problems and a gamification system slapped on top. It is not a revolution in education, though it certainly has its place.


        With regards to data, on paper, I am supposed to be able to report on each of my students' ability to meet each state standard. This is patently absurd, as, depending on the subject and grade level, I have anywhere from ~15-25 standards. Multiply that across my number of students, and you'll have thousands of data points, each of which needs multiple inputs to be valid. This is absolutely impossible to track by hand. In order to approach the problem pragmatically, most schools I've worked in use unit tests as an aggregate data point for a cluster of standards. Basically, one test might focus on, say, solving multi-step equations or reading and analyzing informational texts, and then the student's score would be used as their performance for all standards covered by the test. I know teachers in some schools who have had to separate their tests out by standard so that each one can be individually tracked, which sounds like a nightmare to me.
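        To give a sense of scale, here's a back-of-the-envelope sketch of that tracking burden. The class size and inputs-per-data-point figures are made up for illustration; only the standards count comes from above:

```python
# Back-of-the-envelope sketch with made-up but plausible numbers.
standards_per_course = 20     # midpoint of the ~15-25 range above
students = 120                # hypothetical: five sections of ~24 kids
inputs_per_data_point = 3     # "multiple inputs" needed for validity

data_points = standards_per_course * students
total_inputs = data_points * inputs_per_data_point

print(data_points)   # 2400 standard-level data points to track
print(total_inputs)  # 7200 individual inputs behind them
```

        Even with conservative numbers, you're into the thousands, which is why by-hand tracking is a non-starter.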

        Because we can't adequately pull the data we're supposed to, we also use ed tech tools to do a lot of the data gathering for us. Thus, we have a stream of data on students' reading levels, math abilities, and so on. Some of these give us raw data on the individual student's performance (e.g. they got 80% of fraction addition problems correct) while some of these are normed based on grade level or the national population (e.g. the student is in the 76th percentile in their reading comprehension).

        Some of this data is useful, but I believe that chasing it has become a bigger priority than actual instruction or good teaching practice. Nearly all of our conversations revolve around what the data says and, consequently, how we can make future data say what we want to. Very little conversation, however, is given to the validity of the data in the first place. I had a student who I thought was making amazing progress because of the growth she was showing on one of the ed tech platforms we used. I later found out it was because she was letting her older sister do her work for her. The data was junk but I was analyzing it as if it wasn't.

        Most ed tech platforms don't have safeguards against stuff like that, and it goes beyond straight-up fraudulent use like my student's. In gamified systems, students will often find ways to rack up points without actually learning anything. If they learn that they get rewarded for skills they don't know, they can tank a pre-test or start answering easy questions wrong so that the system flags the skills they do know as in need of review. They can then cash in by getting everything right. Even on skills the students don't know, they'll sometimes just burn through questions writing down the answers so they can parrot those correct answers back on the next run through.

        On multiple choice questions, students readily guess, especially if there's time pressure or competition involved and they're trying to get their answer in quickly. Even if it's something they know how to do, they often won't take the time to execute the process and get it right, as the allure of a quicker answer or beating their friend is worth more to them. As such, the platform isn't accurately measuring skill because the students aren't answering the questions with fidelity. Furthermore, in any sort of independent practice, there's no telling whether the student is using a tool to help them. Many of my students score far higher on their arithmetic skills than they should because, when they're working on their practice problems for homework, they're using a calculator. The ed tech platform then tells me that they're both accurate and fluent, able to compute on the fly, when it's really telling me that the calculator can get math problems correct, and quickly!

        I say all of this not to say that ed tech is garbage. In fact, many of the problems that I brought up have analog counterparts (turning in homework done by an older sibling is not unique to computers, for example). The difference is that the analog methods didn't give us as much data to work with, so there wasn't time lost to crunching numbers we didn't have. But now that data is king, more data is always seen as better, and more data demands more time for analysis and more outcomes based on that data. What's lost here is that, if much of the data is junk in the first place, it not only doesn't do us any good, but actively harms us by making us focus on, chase, and dissect red herrings.

        Years ago I was called on the carpet by an administrator because a large number of my students had gone down on one of the standardized benchmark tests we used throughout the year. Their conclusion was that I was not doing my job well and that I needed to better teach my students. In response, I took the benchmark data for my students and plotted it several cycles back, so that the last five benchmark scores were visible instead of just the two that they had examined. What that showed was that the data was highly variable, going up and down repeatedly, across all students. This showed that the test was unreliable; the retests produced highly volatile data. The shared dip in scores they found was coincidental; in any sufficiently noisy data set you'll be able to find some patterns.
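        You can see the same effect in a quick simulation. This is just a sketch with made-up numbers: every student's "true" ability is held fixed, and each benchmark administration adds sizable measurement noise. Even with zero change in actual ability, a chunk of the class "goes down" between any two administrations:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical model: true ability never changes; each test
# administration just adds large measurement noise.
def benchmark_scores(true_ability, cycles=5, noise=15):
    return [true_ability + random.gauss(0, noise) for _ in range(cycles)]

# A class of 25 students, all with identical, unchanging ability.
class_scores = [benchmark_scores(70) for _ in range(25)]

# Compare only the last two cycles, the way my administrator did.
declines = sum(1 for s in class_scores if s[-1] < s[-2])
print(f"{declines} of 25 students 'went down' -- from noise alone")
```

        Plotting all five cycles per student, the way I did, makes the volatility obvious; looking at only the last two makes noise look like a trend.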

        I showed this to the administrator who criticized me in an attempt to help her understand that the scores didn't even accurately reflect student ability, much less the efficacy of my teaching methods, but I wasn't heard. She didn't rescind her analysis nor her criticism of my teaching practice, and the school continued paying money to use the benchmark. It continued being a cornerstone of their data analysis. My students took the test two more times that year. I added their scores to my plot. It was as noisy as ever.

        The saddest part of this story is that it's not unique. An ex-teacher I know is fond of telling a story about an administrator who talked about the 52nd percentile being above average.[1] When the teacher explained that average is actually a range, which 52 sits solidly in the middle of, the administrator wouldn't hear it and stuck to his understanding that the 50th percentile (and ONLY the 50th percentile) meant average. Despite being the administrator in charge of data analysis for the school, he had a fundamental misunderstanding of what he was looking at and wouldn't trust the honest voice of someone who is, allegedly, working toward the same goal as him: educating our kids.
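        For the numerically inclined, the misunderstanding is easy to pin down: on normed tests, "average" is conventionally reported as a band of percentiles, not a single point. The 25th-75th band below is an assumed (though common) convention, and the function is mine, just for illustration:

```python
# "Average" on a normed test is a band, not the single 50th percentile.
# The 25th-75th band here is an assumed, though common, convention.
def describe_percentile(p, average_band=(25, 75)):
    low, high = average_band
    if p < low:
        return "below average"
    if p > high:
        return "above average"
    return "average"

print(describe_percentile(52))  # "average" -- solidly mid-band
print(describe_percentile(50))  # "average"
print(describe_percentile(90))  # "above average"
```

        By the administrator's reading, roughly half of all students everywhere would be "above average" at any given moment, which should have been the tip-off.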

        This is the disheartening reality in which I and pretty much every other American teacher lives. Education in America is at a place where every major stakeholder puts more stock in numbers than in the people teaching our kids. Educational leadership regularly trusts profit-seeking companies selling them weak instructional supports over the professional expertise and advocacy of their own teachers.


        [1] If anyone here read the visual novel I made for Timasomo, one of the scenes was directly inspired by this.

        18 votes
        1. joplin

          Wow! This was a fascinating read. Thank you for your insight. I don't have kids of my own, but do have nieces and nephews. I really hope they're getting a better education than this, but they're probably not.

          3 votes
  2. mrbig

    In my personal experience, ed tech is just an excuse to save money. The systems suck, and a class can have up to 1,000 students per instructor.

    That's in Brazil. Don't know how things work elsewhere.

    It might be good, but capitalism...

    4 votes